
Africa Seeks Digital Solutions For Better Revenue Collection

As African governments grapple with debt and economic turmoil, digital tools emerge as a beacon of hope for optimizing tax revenue collection.

African governments are turning to digital solutions as a strategic lifeline for improving tax revenue collection. Faced with rising global food and energy prices, many countries on the continent are exploring innovative ways to enhance their financial stability.

In June, Kenya’s Finance Bill 2024 aimed to increase public and business contributions, but the proposal ignited fierce protests. The backlash forced President William Ruto to retract the bill and dismiss his cabinet. In a significant shift, outgoing Finance Minister Njuguna Ndungu has recently questioned the efficacy of new taxes, arguing instead for optimizing existing tax mechanisms.

“High taxes do not necessarily translate to high revenue,” Ndungu stated. “What we need is to optimize each tax instrument.”

A practical approach to this challenge involves enhancing the efficiency of current tax collection systems. One company at the forefront of this digital revolution is N-Soft, which has successfully aided countries like the Democratic Republic of Congo (DRC) and Sierra Leone in automating tax collection through advanced technological services. N-Soft’s intervention in the DRC’s telecom sector, for instance, led to a remarkable 60% increase in tax collection.

Prakash Sabunani, N-Soft’s Senior Vice President, emphasizes the transformative potential of digital tools in revenue generation. “Digital tools are the future,” Sabunani asserts. “With everything converging towards AI, mobile operators, telecom providers, and device manufacturers are all focusing on AI. This new economy is where we need to target our income streams.”

N-Soft advocates for African governments to leverage digital technologies to capture taxes from a range of sectors, including mobile telecommunications, pay TV services, online financial services, and the burgeoning online gaming sector. By doing so, these nations could significantly boost their revenue streams and reduce their reliance on external loans.

Sabunani urges governments to concentrate on optimizing their internal revenue collection systems before seeking international financial aid. “If governments can optimize their data and revenue collection systems, they wouldn’t need to borrow money,” he argues.

However, challenges remain. McDonald Lewanika of Accountability Lab points to systemic issues such as poor resource allocation and governance failures. He questions why citizens should bear the financial burden of debt incurred due to mismanagement and corruption. “Why should people pay for debts that were not supposed to be incurred in the first place?” Lewanika asks.

The experience of Kenya underscores the stakes involved. The backlash against the Finance Bill highlights the potential social and political fallout of poorly managed tax policies. By adopting effective digital tools for tax collection, African governments might avoid such crises and bolster their financial health.

As Africa navigates its economic challenges, the embrace of digital tax solutions could not only streamline revenue collection but also offer a sustainable path towards financial independence and stability. The continent stands at a crossroads, and the integration of technology into tax systems could be a pivotal factor in shaping its economic future.

South Africa’s AI Initiative Aims to Combat Violent Incitement

How Media Monitoring Africa’s New Tool Could Revolutionize Safety—and Raise Free Speech Concerns

Media Monitoring Africa (MMA) is rolling out an artificial intelligence tool aimed at detecting and flagging social media content that could incite violence. The initiative, named Insights into Incitement (I3), represents a significant leap in how technology is harnessed to prevent societal unrest—yet it raises profound questions about its implications for free speech.

I3 is designed to sift through an array of text data, including social media posts, news articles, and political commentary, to identify and assess comments that might incite violence. A ranking algorithm scores the risk of each post, marking it red, yellow, or green according to its potential danger. The tool is accompanied by an online dashboard that offers a transparent, searchable interface for monitoring flagged content.

This initiative emerged from the aftermath of the severe violence that rocked South Africa’s KwaZulu-Natal and Gauteng provinces in 2021, triggered by former President Jacob Zuma’s imprisonment. The ensuing riots, which resulted in 300 deaths and substantial property damage, exposed the role social media played in fueling unrest. MMA’s response aims to preempt such crises by targeting the incendiary content that sparks these disturbances.

At its core, I3 seeks to address the rising threats faced by minorities and vulnerable groups, including women, who are often targeted by hate-fueled rhetoric. “At particular risk are minorities, fueled by xeno- and Afrophobia as well as vulnerable groups,” the project’s designers note.

Yet, as the technology progresses, so does the potential for controversy. The tool’s training involves recognizing and flagging inciting phrases—a process that, while rigorous, might also capture benign discussions or legitimate dissent. Critics argue that such systems could inadvertently stifle free speech if not carefully managed.

The expansion of AI tools like I3 across Africa also carries a layer of irony. Even as AI is deployed to identify and combat disinformation, the same technology can be misused to propagate false or harmful narratives. Recent reports, such as one from Freedom House, highlight this double-edged nature: AI can help counter fake news, but it can also generate or amplify it.

South African attorney and tech law expert Zinhle Novazi, who lectures at Stellenbosch University, supports the tool’s intent but also raises concerns. On LinkedIn, Novazi emphasized that while I3 can significantly reduce response times to potential threats, ensuring the tool does not infringe on legitimate speech is crucial. “The challenge lies in ensuring that the tool is used responsibly and does not infringe upon legitimate expressions of opinion or dissent,” she cautions.

As South Africa pioneers this AI-driven approach to public safety, the debate is just beginning. The balance between leveraging technology for security and safeguarding freedoms will be critical as I3 and similar tools become integral to managing the digital landscape. This innovation promises to enhance safety, but it also underscores the need for rigorous oversight to prevent potential overreach and protect democratic principles.

X Edits AI Chatbot After Election Officials Warn of Misinformation

Changes to Grok AI Chatbot Follow Warnings from Secretaries of State About Election Misinformation

The social media platform X has made modifications to its AI chatbot, Grok, in response to concerns from election officials about the spread of misinformation. Secretaries of state from Michigan, Minnesota, New Mexico, Pennsylvania, and Washington had alerted Elon Musk to inaccuracies in Grok’s responses regarding state ballot deadlines, particularly following President Joe Biden’s withdrawal from the 2024 presidential race.

In response to the officials’ letter, X has adjusted how Grok handles election-related queries. The chatbot now advises users to consult official voting resources, directing them to Vote.gov for accurate and current information. The change aims to curb the spread of misinformation by steering users toward reliable sources such as CanIVote.org, which is recommended by the National Association of Secretaries of State.

Despite these adjustments, Grok’s ability to generate misleading AI-created images about elections remains a concern. Users have exploited the chatbot to produce and share fake images of political figures, including Vice President Kamala Harris and former President Donald Trump. These images contribute to a broader issue of misinformation and manipulation on social media platforms.

Grok, available exclusively to X’s premium subscribers, was introduced by Elon Musk as a deliberately unconventional AI chatbot, one willing to tackle “spicy questions” that other AI platforms might avoid. Since Musk’s acquisition of Twitter in 2022 and its rebranding as X, concerns have grown about rising hate speech and misinformation on the platform, alongside cuts to its content moderation staff.

The incident highlights ongoing challenges in managing misinformation on social media. The evolution of AI technology, particularly in the realm of chatbots and image generation, has raised concerns about the accuracy and reliability of information circulated on these platforms. The updates to Grok are part of a broader effort to address these issues, but experts caution that such measures may not be sufficient given the scale and impact of misinformation.

As the 2024 elections approach, the pressure is mounting on social media platforms to ensure that their systems do not contribute to the spread of false information. The changes to Grok represent a step towards addressing these concerns, but the effectiveness of these measures in preventing the dissemination of misinformation remains to be seen.

X’s updates to its AI chatbot Grok in response to election officials’ warnings are an important development in the fight against misinformation. However, ongoing vigilance and improvements are necessary to address the broader challenges posed by AI and social media in the electoral landscape.
