
How Consumers Can Influence Who Controls AI

25 August 2024 23:34 WIB

Illustration of AI. (Photo: shutterstock.com)

By: Arif Perdana, Associate Professor at Monash University Indonesia, specialising in digital strategy and data science, and director of Action Lab, Indonesia; and Ridoan Karim, Lecturer in Business Law and Deputy Director of Undergraduate Studies at the School of Business, Monash University Malaysia.

AI development is currently concentrated in the hands of a small number of companies. Public vigilance can help ensure they stick to the ethical use of the technology.

Warren Buffett got it partly right about AI. The billionaire investor and philanthropist told CNN earlier this year: “We let a genie out of the bottle when we developed nuclear weapons … AI is somewhat similar — it’s part way out of the bottle.”

Buffett’s rationale is that, much like nuclear weapons, AI holds the potential to unleash profound consequences on a vast scale, for better or worse.

And, like nuclear weapons, AI is concentrated in the hands of a few: in AI’s case, tech companies and nations. It is a comparison that is not often drawn.

As these companies push the boundaries of innovation, a critical question emerges: Are we sacrificing fairness and societal well-being on the altar of progress?

One study suggests that Big Tech’s influence is ubiquitous across all streams of the policy process, reinforcing these companies' position as "super policy entrepreneurs."

This allows them to steer policies to favour their interests, often at the expense of broader societal concerns.

This concentrated power also allows these corporations to mould AI technologies using vast datasets reflective of specific demographics and behaviours, often at the expense of broader society.

The result is a technological landscape that, while rapidly advancing, may be inadvertently deepening societal divides and perpetuating existing biases.

Ethical concerns

The ethical concerns stemming from this concentration of power are significant.

If an AI model is primarily trained on data reflecting one demographic's behaviour, it may perform poorly when interacting with or making decisions about other demographics, potentially leading to discrimination and social injustice.
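
The mechanism described above can be made concrete with a small sketch. The example below is illustrative rather than drawn from the article: it assumes only NumPy and scikit-learn, and the groups, features, and 95/5 training split are invented for demonstration. It shows how a classifier trained mostly on one group's data can perform far worse on an under-represented group whose feature-label relationship differs.

```python
# Toy illustration (not from the article): skewed training data produces
# unequal accuracy across groups. All names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, flip):
    # Two numeric features; the feature-label relationship differs between groups.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + (-X[:, 1] if flip else X[:, 1]) > 0).astype(int)
    return X, y

# Skewed training set: 95% "group A", 5% "group B".
Xa, ya = make_group(1900, flip=False)
Xb, yb = make_group(100, flip=True)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets expose the accuracy gap caused by the skewed training data.
Xa_test, ya_test = make_group(5000, flip=False)
Xb_test, yb_test = make_group(5000, flip=True)
print("accuracy on group A:", round(accuracy_score(ya_test, model.predict(Xa_test)), 2))
print("accuracy on group B:", round(accuracy_score(yb_test, model.predict(Xb_test)), 2))
# The model fits the majority group well but is close to chance on the minority group.
```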

This bias amplification is not just a theoretical concern but a pressing reality that demands immediate attention.

Porcha Woodruff, for example, a pregnant Black woman, was wrongfully arrested due to a facial recognition error, a stark reminder of AI's real-world consequences.

In healthcare, a widely used algorithm severely underestimated Black patients' needs, leading to inadequate care and perpetuating existing disparities. These cases underscore a troubling pattern: AI systems, trained on biased data, amplify societal inequalities.

Consider the algorithms driving these AI systems, developed mainly within environments that lack sufficient oversight regarding fairness and inclusivity.

Developing bias

Consequently, AI applications in areas such as facial recognition, hiring practices, and loan approvals might develop biased outcomes, affecting underrepresented communities disproportionately.

This risk is accentuated by the business model of these corporations, which emphasises rapid development and deployment over rigorous ethical review, putting profits above proper consideration of long-term societal impacts.

To counter these challenges, a change in AI development is urgently needed.

A good start would be to broaden influence beyond Big Tech companies to include independent researchers, ethicists, public interest groups and government regulators, working collaboratively to establish guidelines that prioritise ethical considerations and societal well-being in AI development.

Governments have a pivotal role to play.

Stringent antitrust enforcement would limit Big Tech’s power and promote competition.

An independent watchdog with the authority to sanction Big Tech practices would also help, along with greater public participation in policymaking and transparency requirements for tech companies' algorithms and data practices.

Global cooperation on ethical standards, and investment in educational programs that empower citizens to understand the impact of technology on society, will further support these efforts.

The academic world, too, can step up. Researchers can advance methods to detect and neutralise biases in AI algorithms and training data. By engaging the public, academia can ensure diverse voices are heard in the shaping of AI policy.

Public vigilance and participation are indispensable for holding companies and governments accountable. The public can exert market pressure by choosing AI products from companies that demonstrate ethical practices.

While regulating AI would help prevent the concentration of its power among the few, antitrust measures that curb monopolistic behaviour, promote open standards, and support smaller firms and startups could help steer AI advancements towards the public good.

Unique opportunity

Nonetheless, the challenge remains that developing AI requires substantial data and computational resources, which can be a significant hurdle for smaller players.

This is where open-source AI presents a unique opportunity to democratise access, potentially creating more innovation across diverse sectors.

Giving researchers, startups, and educational institutions equal access to state-of-the-art AI tools levels the playing field.

The future of AI is not predetermined. Taking action now can shape a technological landscape that reflects our collective values and aspirations, ensuring the benefits of AI are shared equitably across society. The question is not whether we can afford to take these steps but whether we can afford not to.

Originally published under Creative Commons by 360info™.

*) DISCLAIMER

Articles published in the “Your Views & Stories” section of en.tempo.co website are personal opinions written by third parties, and cannot be related or attributed to en.tempo.co’s official stance.


