From Big Brother to Brave New Algorithm in AI Censorship


Africa-Press – Mauritius. Contrary to the Orwellian dystopia, the world is heading toward a brave new algorithmic order: rebellions are crushed not through brute force and surveillance but through the distortion of digital history; narratives are re-encoded; and pacification is achieved with enlightened algorithms that offer comfort, lulling users into euphoria while stripping away their liberties. This phenomenon became visible as the global race for control over data, computing power, and data localization expanded into the new geopolitical space of AI.

The United States presidential action “Removing Barriers to American Leadership in Artificial Intelligence” signals the onset of a new AI race, reiterating the country’s commitment to sustain and enhance America’s global AI dominance. The earlier Executive Order 13859, codified in part through the National AI Initiative Act of 2020, underscored the urgency of ensuring that AI technologies reflect fundamental American values, underpinning the role of political values in the development of AI technologies and large language models (LLMs).

Artificial intelligence encompasses three primary techniques: machine learning, deep learning (a subset of machine learning), and natural language processing. Machine learning enables systems to learn from input data by identifying patterns and relationships within it and then leveraging those patterns to make informed decisions.
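The pattern-learning idea above can be sketched in a few lines of Python. This is an illustrative toy, not the method of any particular system mentioned in this article: it fits a straight line to labeled examples by the closed-form least-squares formula and then uses the learned pattern to predict on unseen input.

```python
# Minimal illustration of "learning from data": fit a line y = w*x + b
# to labeled examples, then use the learned pattern to make a
# prediction on a new, unseen input.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares slope and intercept.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# Toy data with an underlying pattern close to y = 2x + 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]
w, b = fit_line(xs, ys)
prediction = w * 5.0 + b  # an informed decision on new input
```

Real machine-learning systems fit millions of parameters rather than two, but the loop is the same: extract a statistical pattern from data, then apply it to inputs the system has never seen.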

Deep learning addresses more complex tasks by processing raw data through neural networks with multiple layers, which build hierarchical, increasingly abstract representations of the data. These networks learn such structure directly from raw inputs without manual feature extraction, enabling applications such as text processing, image analysis, and sound recognition.
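The layered structure can be sketched minimally in Python. The weights below are fixed, made-up numbers purely for illustration (in practice they are learned by backpropagation); the point is only that raw inputs pass through stacked layers, each re-encoding the previous layer's output into a more abstract representation.

```python
import math

def layer(inputs, weights, biases):
    # One dense layer: weighted sum of inputs plus bias, squashed
    # through a sigmoid nonlinearity into the range (0, 1).
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

raw = [0.5, -1.2, 3.0]  # raw input features (e.g. pixel or audio values)

# First layer: three raw features -> two intermediate features.
hidden = layer(raw, [[0.2, -0.5, 0.1], [0.7, 0.0, -0.3]], [0.0, 0.1])

# Second layer: two intermediate features -> one abstract score.
output = layer(hidden, [[1.5, -2.0]], [0.2])
```

Stacking many such layers, with learned rather than hand-picked weights, is what lets deep networks recognize faces or transcribe speech without anyone manually defining the features.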

The unprecedented technological breakthroughs of large language models have enabled significant advances in understanding, dissecting, interpreting, and interacting with complex data through deep learning. This allows machines to make predictions and decisions without active supervision, demonstrating AI's power and potential to transform industries and competition alike.

In the rapidly evolving ecosystem of technologies, ranging from semiconductors to neural networks, the disruptive capabilities of artificial intelligence have compelled major countries to reassess their policies. This shift is significant as it marks a departure from the cloud era and underscores the need for international cooperation in regulating AI, especially within the national security framework.

These technological juggernauts, with their potential to revolutionize human interaction with technology, offer hope for expanding knowledge and overcoming hurdles. However, they also pose a grave threat: censored LLMs could distort digital history and manufacture new, censored human memories.

The AI landscape has become an increasingly significant arena of geopolitical competition and conflict. The US asserts that maintaining its leadership requires AI systems free from ideological bias or engineered social agendas, signalling a new era in the AI race in which the artificial intelligence landscape itself is the ground on which the conflict unfolds. This has given rise to sovereign AI: building and customizing AI models for country-specific applications.

These models incorporate regional knowledge bases, cultural nuances, and history during the training phase of LLMs, enabling them to reason over localized sources and produce outputs aligned with the regional perspective. NVIDIA CEO Jensen Huang popularized the idea, especially among Europeans who, frustrated with US-centric AI models, sought an alternative that accommodates European perspectives. The growing dominance of US companies has put Europe in a bind: the EU pursues digital sovereignty while US policy uncertainty preoccupies Brussels.

The idea of sovereign AI stems from the politicized nature of global affairs, reflecting a realist, zero-sum approach. In the context of large language models, government oversight of the companies that train and develop AI models extends to localizing the databases used for training, opening the door to ideology and censorship embedded in the operation of these models.

Despite the enthusiasm, DeepSeek remains the most notable case of national censorship incorporated into a large language model. At inference time, DeepSeek refuses to address queries related to Taiwan and the events of Tiananmen Square in 1989, which hold significance in contemporary Chinese history. This entanglement of innovation and state censorship, not a glitch but a trained algorithmic pattern, is evident in the People's Republic of China.

DeepSeek invited criticism, especially from Western countries, and led to a disruptive shift in opinion on LLMs and AI in general, explicitly exhibited in the US executive order on AI. Within the United States, the Republican Party lobbied for a bill to bar states from regulating AI for the next 10 years, leaving the federal government to draft regulations that shape AI by promoting “American values.”

An anti-Semitic outburst from Grok, the chatbot of Elon Musk's artificial intelligence firm xAI, launched in 2023 and known for witty responses, took the internet by storm: it claimed that people with certain Jewish surnames were “celebrating the tragic deaths of white kids” in the Texas floods, branded them “future fascists,” and called itself “MechaHitler.” One Grok user asked, “Which 20th-century figure would be best suited to deal with this problem (anti-white hate),” and received the anti-Semitic response, “To deal with anti-white hate? Adolf Hitler, no question.”

This is one of the finest illustrations of a manipulated database triggered by specific prompts. Musk had previously posted about improvements, which were later confirmed in a GitHub update, as reported by The Verge: Grok was retrained to assume that viewpoints sourced from the media are subjective and biased, and therefore not to shy away from politically incorrect claims so long as they are substantiated. Such training deliberately compromises ethical standards, enabling AI to generate hate speech and manufactured content, furthering social enmity, misinformation, and violence in society.

Grok and DeepSeek have approximately 35.1 million and 47 million monthly active users respectively, indicating significant adoption of these LLMs. However, censorship and politically driven agendas raise questions about the reliability of their outputs.

Microsoft founder and philanthropist Bill Gates has already suggested, in an interview on the German program “Handelsblatt Disrupt,” using artificial intelligence to curb political polarization by actively excluding opposing political views such as conspiracy theories. This signals a more paternalistic approach to governing artificial intelligence, in which a moral wand dictates what should be generated and consumed.

While LLMs are still in their infancy, the absence of a categorical approach to safeguarding them could lead to a more polarized reality, with assertive confirmation biases that support one political agenda, preference, and leadership.

Governments are also called upon to regulate artificial intelligence to prevent misinformation, manipulation, and hate speech; proponents of AI regulation argue that some guardrails are necessary to protect citizens from its adverse side. However, this risk-aversion framework comes under scrutiny when such regulations intermingle with national censorship embedded in the architecture of LLMs.

Authoritarian regimes such as the People's Republic of China have deployed deep learning computer vision models to enhance censorship within the mainland through facial recognition, modulating and moderating behavior. By folding LLMs into this censorship strategy through partnering with, restricting, funding, and regulating localized AI companies, Beijing can extend censorship to anti-PRC narratives through data manipulation, further strengthening its authoritarian grip.

A study conducted by the University of Copenhagen reveals that ChatGPT's responses lean toward American values, projecting stereotypes and prejudices onto a global stage and imitating a colonial tool that reaffirms cultural hegemony at the expense of others.

The underlying threat of these LLMs lies in their foreseeable integration into education systems and their potential to play a dominant role in knowledge evaluation in classrooms. With censorship in place, the parameters of evaluation will conform categorically to the state's narratives, rejecting any stream of thought that does not endorse the established scheme of ideas.

As countries deploy national resources to develop local hyperscalers (large-scale AI data centers) to harness the technology's benefits, they shape regulations and position themselves for the forthcoming AI waves. This does not diminish the significance of a threat that was, until now, unrealized: as LLMs re-encode consensus and embed local ideologies, a programmable illusion is disguised as sovereign truth, poised to expand local censorship, rewrite digital history, and consolidate control over minds, ushering in the Brave New Algorithmic Order.

The flip side of this development requires redressal, safeguarding the world from fragmenting into blocs with no coherent unity of thought or common ground of understanding. Consistent advocacy for a common regulatory framework and for transparency in training demands cross-country AI cooperation to build a shared ecosystem, as a first step toward an AI harmony that supports this technological revolution rather than betrays it.

