
Can We Trust Machines Doing the News?

Editor: Laila Afifa

28 March 2023 22:05 WIB

By: Chiara Longoni, a behavioral scientist and Assistant Professor of Marketing at Boston University’s Questrom School of Business.

AI is becoming more prevalent in everyday situations, but distrust remains. The challenge is overcoming that barrier.

When you scan the headlines on your favorite news app each morning, do you ever stop to think who — or what — wrote the story?

The assumption is that human beings are doing the work. But it's also possible an algorithm wrote it. Artificial intelligence can produce text, images, and audio with little to no human intervention. For instance, the neural network called Generative Pre-trained Transformer 3 (GPT-3) can produce text — a fictional story, a poem, or even programming code — virtually indistinguishable from text written by a person.

Major media outlets such as The Washington Post, The New York Times, and Forbes have automated news production with the aid of generative AI – AI algorithms that autonomously produce textual content. With great advances in machine learning and natural language processing, the difference between content written by a human and content produced by advanced neural networks such as GPT-3 can be indiscernible even in quintessentially humanistic domains such as poetry.

As we come to rely more on AI-generated information in everyday settings, the question of trust becomes more important.

Recent studies have examined whether people believe AI-generated news stories or trust AI-generated medical diagnoses.

They found that people are mostly skeptical of AI. A machine can write an accurate story full of facts, but readers will still second-guess its veracity. And while a program can give a more accurate medical analysis than a human, patients are still more likely to go with their (human) doctor’s advice.

The conclusion is that people are quicker to lose trust in AI when it makes a mistake than they are to lose trust in an individual human. When a reporter makes an error, a reader isn't likely to conclude that all reporters are unreliable. After all, everyone makes mistakes. But when AI makes a mistake, we are more likely to mistrust the entire concept. Humans can be fallible and forgiven for being so. Not so machines.

AI content is not generally marked as such. It’s rare for a news organization to flag in the byline that the text was produced by an algorithm. But AI-generated content may lead to bias or misuse, and ethicists and policymakers have advocated for organizations to transparently disclose its use. If disclosure requirements are enforced, future headlines might include a byline that tags AI as the reporter. 

This research also examined how disclosing the use of AI in news generation affected perceptions of news accuracy. The results strongly corroborated the AI-aversion account: disclosing the use of AI led people to believe news items substantially less, a negative effect explained by lower trust in AI reporters.

Media outlets are faced not only with the challenges of grabbing the attention of readers in a highly competitive digital marketplace but also of earning their trust.

This is true of any organization that uses digital technologies to inform its customers, whether that be a regulator, business, or academic institution. In fact, the robustness of the negative effect found in the research suggests that AI aversion applies to other domains where AI-generated text is used.

AI is a tool. There certainly needs to be oversight and regulation, but it also has the potential to do a lot of good.

AI could democratize healthcare, for instance by powering an app for skin cancer risk assessment. People unable to afford a dermatologist, or who simply don't have access to that type of care, could be alerted to a primary red flag: get this mole checked.

AI has the potential to make prescriptively good outcomes available to people who otherwise would not have access to them. And so the question is: if AI can be used for such positive ends, how can we understand how people view it, and how can we foster uptake?

If forced to disclose AI-generated content, what can an organization do to retain trust in the information it communicates? The answer is not yet clear, but the hope is that the current findings will raise awareness of how disclosing AI-generated content affects perceived accuracy and trust, and will encourage further research on this topic.

When AI serves in an assistant role to a person who retains veto power, people are more likely to be comfortable with AI doing some of the work, as long as the final call is made by a human being.

Originally published under Creative Commons by 360info™.

*) DISCLAIMER

Articles published in the “Your Views & Stories” section of en.tempo.co website are personal opinions written by third parties, and cannot be related or attributed to en.tempo.co’s official stance.


