
Which of the following statements is true?

 

  • Goldfish have a memory that lasts three seconds
  • Sugar causes hyperactivity
  • Humans use only 10% of their brain

 

If you thought any were ‘facts’, you were wrong – all three are false.

 

A goldfish’s memory lasts months, the link between sugar and hyperactivity is a myth that grew out of a single study in the 1970s and, depending on the task at hand, your brain is continuously firing neurons in different regions.

 

While shattering these small myths won’t change much in your life, there are some falsehoods that do have an impact in the real world. 

What is behind fake news?

A fake article titled “Breaking: Nancy Pelosi’s Son Was Exec At Gas Company That Did Business In Ukraine” was shared over 100 000 times on Facebook in 2019, while President Trump was facing potential impeachment hearings. Was that an attempt by an unscrupulous individual to defend the President?

 

What about when falsehoods – in other words, fake news – come from reputable sources? For example, The New York Times claimed in August 2020 that South Africa had the fifth highest Covid-19 infection rate. Was there malicious intent, or was it a miscalculation of the rates? Whatever the reason, the article was fake news.

Don’t be part of the problem

While many of us are victims of fake news, we are also propagators of it. How many times have you forwarded alarmist WhatsApp messages or shared an article on Facebook from a “trustworthy” news site without digging into the detail?

 

We often spread fake news not out of malice, but out of ignorance of what the truth actually is.

A Pew Research Center survey found that “10% of respondents admitted to sharing a news story online that they knew was fake, while 49% had shared news that they later found to be false”.

 

Who can blame us? It is at times beyond an individual citizen’s capability to verify some claims. How can we verify statistics coming out of China that are shared in articles on China Central Television’s (CCTV) Facebook page?

 

Research has shown the Chinese government has an influence on what gets reported – as highlighted in this journal paper.

 

Of course, a media outlet does not have to be state controlled to create dissension or controversy. If you simply switch between Fox News and CNN, you will recognise that the polarisation is palpable. Where there is polarisation, there is bias. Where there is bias, there is the potential for fake news. 

Social media’s role in the spread of fake news

Social media has exacerbated the spread of fake news. Platforms such as Facebook, Twitter, Instagram and WhatsApp are becoming intricately intertwined with the way we communicate and stay in touch as humans. These platforms hold data on how many times we log in, who we speak to and what content we spend our time reading and interacting with.

 

The amount of data these platforms keep on us has given fake news fertile ground to grow deep roots, whether the cause is malicious or innocent ignorance.

 

The platforms give developers application programming interfaces (APIs), which can be used to access data from the platforms’ databases – so anyone can pull, analyse and profile that information. For example, they can figure out your affiliations (religious or political), who you will most likely vote for, and present you with adverts or news articles to subtly convince you to change your mind.
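
To make this concrete, here is a minimal Python sketch of what such an API call might look like. The endpoint, field names and token are hypothetical, invented purely for illustration; each real platform (for example, the Facebook Graph API or the Twitter API) defines its own endpoints, permissions and rate limits.

```python
# Illustrative only: the endpoint, fields and token below are hypothetical.
# Real platforms require a registered developer app and enforce permissions.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # issued by the platform to a developer app
BASE_URL = "https://api.social-platform.example/v1"  # hypothetical base URL

response = requests.get(
    f"{BASE_URL}/users/12345",
    params={"fields": "name,liked_pages,group_memberships"},  # hypothetical fields
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
response.raise_for_status()
profile = response.json()

# From data like this, an analyst can start to infer interests and affiliations,
# and decide which adverts or articles are most likely to sway this user.
print(profile)
```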

 

This is what Cambridge Analytica did in its efforts to swing votes in Donald Trump’s favour in the 2016 US elections.

AI and the fight against fake news

So is the fight against fake news a lost cause? Some people don’t think so. They are taking advantage of the vast data these platforms hold and are coupling it with advanced artificial intelligence and machine learning techniques to take on the false ‘facts’ that have an impact in the real world.

 

Here are two examples:

1. Grover

 

Grover is a new tool that can detect fake news more reliably than existing technologies can. The solution is fascinating because the tool detects fake news by being so good at writing fake news itself!

 

Researchers at the University of Washington and the Allen Institute for AI trained it to invent untrue news stories by feeding the model a large dataset of real news. From that data, it learned how certain phrases are used, how ideas are linked to topics, the tone of different news outlets and so on. Grover models all of this through an intuitive adversarial architecture.

 

To quote the paper, published on Cornell University’s arXiv: “This is done using an ‘adversarial’ system, wherein one aspect of the model (called the generator) generates content and another (called the discriminator) rates how convincing it is — if it doesn’t meet a threshold, the generator tries again, and eventually the whole model learns what is convincing and what isn’t.”

 

Grover was able to start generating its own news articles based on input such as topic and news outlet (for example, a news article about Donald Trump improving the economy in the tone of Fox News).

 

That ability to generate real-sounding news articles is now being flipped on its head: the discriminator that rated how convincing each article was during the training phase can be turned around to flag machine-generated fake news.
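
As a rough illustration of that generator-versus-discriminator idea, here is a toy adversarial training loop in Python (PyTorch). It operates on random vectors standing in for article embeddings and is not Grover’s actual architecture, which is a large transformer language model; it only shows how the two components push against each other.

```python
# Toy adversarial loop: a generator learns to produce "article embeddings" that
# a discriminator cannot distinguish from real ones. Purely illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
EMBED_DIM = 32  # assumed size of a toy "article embedding"

generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, EMBED_DIM))
discriminator = nn.Sequential(nn.Linear(EMBED_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, EMBED_DIM)       # stand-in for embeddings of real articles
    fake = generator(torch.randn(32, 16))   # "fake article" embeddings

    # Discriminator: score real articles high and generated ones low.
    d_loss = bce(discriminator(real), torch.ones(32, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: produce embeddings the discriminator finds convincing.
    g_loss = bce(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```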

 

2. Start-ups fighting deep fakes

 

Every technological advancement has a good side and, sadly, the potential for abuse. Deep fakes offer researchers and companies incredible opportunities, from creating art (for example, Peter Cushing was brought “back to life” in 2016’s Rogue One: A Star Wars Story) to generating fake brain scans based on real ones, which are in turn used to improve brain-tumour detection algorithms.

 

At the same time, the technology is giving mischievous individuals a way to create spurious news in the one format that is largely trusted by many: video. The common phrase is, “Show me a video or it didn’t happen.”

 

Deep fakes are videos or images created by AI algorithms in such a way that they look very real. For example, you can take a video of Donald Trump giving a speech, superimpose it onto a video of Joe Biden giving a speech and render it with Biden’s mannerisms and voice, so that whatever Trump says appears to be coming from Biden.

 

In the past, it required a sizeable amount of computing resources to create this effect, but as cloud resources become cheaper and more accessible, it is now relatively easy to create a deep fake video.

 

It is not just speeches that are susceptible to deep fakes: it is now just as easy to superimpose someone’s face onto another person’s body and make it look realistic with almost no skill in video editing.

 

The technology behind this type of video manipulation is itself incredible, with many positive use cases. But it has started to add fuel to the fake news fire, as highlighted in an article I recently published on LinkedIn discussing the issue of deep fakes.

 

Start-ups such as Oaro, Sentinel, and Sensity are using AI algorithms to fight back. They are training their solutions on large databases of deep fakes with the hope that their trained algorithms will be able to spot similar-looking videos online.
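
As a rough sketch of the kind of detector such start-ups build, the Python (PyTorch) snippet below trains a frame-level real-versus-fake classifier. The model choice, input size and random stand-in data are assumptions made for illustration; it is not any of these companies’ actual systems, which are trained on large, curated databases of manipulated video.

```python
# Minimal frame-level deep fake classifier sketch (illustrative assumptions).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)          # in practice, start from pretrained weights
model.fc = nn.Linear(model.fc.in_features, 1)  # single logit: 1 = fake, 0 = real

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (batch, 3, 224, 224) video frames; labels: (batch, 1), 1 = deep fake."""
    optimiser.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimiser.step()
    return loss.item()

# Toy usage: random tensors stand in for a labelled deep fake dataset.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()
print(train_step(frames, labels))
```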

The link between clickbait and fake news

The fight against fake news will need all of us to take up arms. We all need to be suspicious and vigilant, and to hold ourselves accountable.

 

If no one consumes this content, the fire will simply die out. We used to call fake news by a fancy, cute name – clickbait – because it was all about websites chasing clicks with nothing of substance in their articles. Sadly, those clicks we gave them have proved a powerful business case for fake news.

 

Unscrupulous and sometimes even dangerous individuals have figured out that they can publish fake news and attract thousands of shares and clicks which, in turn, influence their readers’ minds, behaviour and even morals.

 

President Trump tweeted that he had won the 2020 Presidential Election. Without verification from an official body, that was fake news but it had already riled up his supporters to reject any announcement to the contrary.

 

When an article sounds too outrageous, social media platforms depend on you to report it. This, in turn, gives their algorithms more data with which to improve, better spot fake articles and thereby reduce their impact in the real world.

Here are some tips you can use to identify fake news:

As this fight against fake news is going to require action by all of us, we should try to make full use of tools provided by social media platforms themselves.

 

Their algorithms are great to a certain extent, but there are moments where humans are better at discerning what is real and what is false. These are not foolproof tips as fake news generators know how to ‘trick’ the system, but they may get you closer to the truth:

 

1. Keep a cool head. First and foremost, be critical of everything you read online. Stay vigilant and remain objective when you read an article with emotionally charged headlines.

 

2. Look for the blue tick. Is the page sharing the article verified or not? Verified pages have a blue tick on most major platforms. If there is no verified badge, go through the author’s profile and get a sense of what they publish. Are all their articles alarmist? Do their articles contain links to trusted external sources (such as renowned research journals)?

 

3. Cross check. Use Google to check whether other news sources have reported on the story. If so, who else has picked up on it? It is true that even reputable news outlets sometimes pick up fake news published by one of their peers and create an echo chamber, but that is still better than a story carried only by a group of unknown websites.

 

4. Software as a guard. Install news verification tools such as NewsGuard, a browser extension that rates news sources based on the credibility and transparency of the outlet, using feedback from professional journalists. If you do not want to install anything, you can use the website FakerFact, which uses AI text analysis algorithms to classify text into different categories, leaving you to decide whether the article fits the classification (a toy sketch of this kind of text classification follows the list).

 

5. Report articles you feel are false. With enough feedback, social media platforms’ algorithms understand better how to block fake news. Humans working together with these algorithms might be the best solution we have to filter out what is false and what is real.
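
To give a feel for the kind of AI text analysis mentioned in tip 4, here is a toy Python sketch using scikit-learn. The tiny hand-written dataset and the two categories are purely illustrative assumptions; real tools are trained on far larger labelled corpora with more sophisticated models.

```python
# Toy text classifier: TF-IDF features plus logistic regression, trained on a
# hand-made dataset that stands in for a real labelled corpus (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "SHOCKING: Miracle cure the government doesn't want you to know about",
    "You won't BELIEVE what this celebrity said about vaccines",
    "Central bank holds interest rates steady amid inflation concerns",
    "Local council approves budget for new public library",
]
labels = ["clickbait", "clickbait", "news", "news"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(headlines, labels)

print(classifier.predict(["Doctors HATE this one weird trick"]))
```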

About the author

Houston Muzamhindo, Data Scientist, Investec Private Bank


I am a Data Scientist in the Analytics Consulting Team at Investec Private Bank, where my team focuses on building the bank’s machine learning and artificial intelligence capability, coming up with use cases and implementing them. I founded an edutech startup during my undergraduate studies; it currently has just over 17,000 students registered, mostly from South Africa.
