On August 9, Kenya, one of Africa's leading economies, will elect a new president.
The winner of the contest between four-time presidential candidate Raila Odinga, 77, and Deputy President William Ruto, 55, will replace President Uhuru Kenyatta, who came to power in 2013.
Ahead of the vote, different agencies and organisations are preparing to ensure that this landmark election is free and fair. One of them is Meta-owned Facebook.
The company revealed in a blog post published last week that it’s been “preparing for Kenya’s 2022 election over the past year with the help of a dedicated team that’s working closely with election authorities and trusted partners in the country.”
The blog post, which was written by Meta’s director of public policy, East & Horn of Africa, Mercy Ndegwa, further outlined the different steps Meta is taking to ensure a safe and secure election in Kenya.
Since 2018, Meta has set up operations centres for major elections around the world, including Kenya's. According to Meta, the Kenyan operations centre monitoring the election is staffed by a team of experts who have spent a significant amount of time in the country.
Election campaigns are often riddled with hate speech, misinformation, and malicious attacks on candidates. To curb these, Meta says it has built more advanced detection technology, grown its global safety and security team to more than 40,000 people, and hired more content reviewers to moderate content across its apps in more than 70 languages—including Swahili.
For posts that violate Meta's policies, the company takes a range of actions, such as deleting posts, issuing warning strikes, and temporarily or permanently banning accounts from the social network. In the six months leading up to April 30, 2022, Meta says it took such action on 42,000 pieces of content that violated its violence and incitement policies, and on 37,000 pieces of content that violated its hate speech policies.
Meta also temporarily reduces the distribution of content from accounts that have repeatedly or severely violated its policies.
Beyond punishing bad actors, Meta has organised voter education programmes like My Digital World, which raises awareness among young people, teachers, and parents on topics such as online safety, privacy, and digital citizenship. The company has also partnered with iEARN Kenya to give teachers and parents lessons on how to safely guide people through the digital world.
To reach the wider population, especially those in rural areas, Meta is using radio programmes to educate people on hate speech and misinformation in multiple local languages, including Luo, Kalenjin, Kikuyu, and Swahili, as well as English.
Political advertising on social media has been a game changer for campaigns. To reduce manipulative political ads, Meta has made them more transparent, letting people see who is behind an ad, which demographics it targets, and how much money is being spent on the campaign. People can also personalise their feeds and choose to see fewer political ads.
“In the six months leading up to April 30, about 36,000 ad submissions targeted to Kenya were rejected before they ran for not completing the authorization process or not attaching a disclaimer,” the blog post read.
Over the years, Facebook, which has 2.93 billion users, has drawn widespread criticism for suppressing valuable democratic debate through its failure to handle misinformation, hate speech, and other forms of manipulation. The steps outlined above suggest that the social network giant is listening to feedback and addressing those concerns.