
TECH

Appeals Court Revives Google Privacy Class Action Lawsuit


9th Circuit Overturns Dismissal, Orders Reassessment of User Consent and Data Collection Practices

In a significant legal development, the 9th U.S. Circuit Court of Appeals has revived a class action lawsuit against Google, scrutinizing the company’s data collection practices related to its Chrome browser. The court’s ruling on Tuesday overturns a previous dismissal and mandates a closer examination of whether Google collected personal information without user consent.

The lawsuit, brought by Google Chrome users, alleges that the tech giant collected their personal data despite their decision not to synchronize their browsers with their Google accounts. The plaintiffs argue that Google’s privacy disclosures were misleading, suggesting that users could browse privately without their data being collected.

The 9th Circuit’s decision reflects a 3-0 vote, led by Circuit Judge Milan Smith, who criticized the lower court’s approach. Judge Smith highlighted that the lower court had incorrectly applied Google’s general privacy policy, rather than focusing on the specific promises made about Chrome’s privacy features.

Google had previously settled a separate lawsuit concerning Chrome’s “Incognito” mode, agreeing to destroy billions of records and face individual lawsuits from users who believed their private browsing sessions were being tracked. Despite this settlement, the revived class action addresses additional concerns about data collection in non-synced Chrome browsers.

The plaintiffs’ legal representative, Matthew Wessler, expressed satisfaction with the appellate court’s decision, anticipating a trial to further explore the issues raised. The class action now covers Chrome users who, since July 27, 2016, opted not to sync their browsers with Google accounts.

Google responded to the ruling by defending its practices, stating, “We disagree with this ruling and are confident the facts of the case are on our side.” The company emphasized that Chrome Sync is designed to enhance user experience across devices and that users have clear privacy controls over their data.

The appeals court’s decision challenges the interpretation that Google’s general privacy policy covers all aspects of data collection. Judge Smith pointed out that Google’s promotional materials for Chrome implied that certain information would not be transmitted unless users activated the sync feature. This implication could lead reasonable users to believe their data was not being collected in the way alleged by the plaintiffs.

The case has been remanded to U.S. District Judge Yvonne Gonzalez Rogers in Oakland, California, who initially dismissed the lawsuit in December 2022. The decision underscores ongoing concerns about privacy and consent in digital services, particularly regarding how tech companies handle user data.

Following the Incognito mode settlement, many users have pursued individual lawsuits in California courts. The revived class action will now examine broader allegations of privacy violations in Google’s Chrome browser.

TECH

Intel’s $3.5 Billion Bonanza: U.S. Chips In to Bolster Military Tech Amidst Company Struggles


Despite internal turmoil and global competition, Intel secures a massive government grant to bolster U.S. chip production for defense purposes.

Intel is reportedly on the brink of securing a monumental $3.5 billion in government grants to establish advanced chip manufacturing facilities. This substantial funding, reported by Bloomberg, is part of a broader U.S. initiative aimed at reducing reliance on foreign chip producers and boosting local production capabilities.

The agreement, which could be officially announced within days, will see Intel expand its operations with new facilities across multiple U.S. states, including a major plant in Arizona. This project is envisioned to enhance the production of cutting-edge computer chips for both civilian and military applications, reinforcing the U.S. Department of Defense’s technological edge.

Intel’s anticipated windfall comes on top of the $8.5 billion in grants and $11 billion in loans it received earlier this year under the CHIPS and Science Act. This legislation, championed by President Joe Biden, was designed to revitalize the American semiconductor industry and reduce dependency on Asian manufacturers.

Yet, Intel’s path to this financial boon has not been without obstacles. The selection process was fraught with pressure from rival chip makers, concerns about over-reliance on a single company, and bureaucratic delays that almost threatened to trim Intel’s grant. Despite these challenges, Intel emerged as the frontrunner, a testament to its strategic positioning in the semiconductor market.

The decision to funnel billions into Intel is especially notable given the company’s current struggles. August saw Intel grappling with disappointing second-quarter results and announcing a drastic 15% reduction in its workforce. The company’s board has been deliberating severe measures to stabilize its position, including pausing costly factory projects, divesting from divisions like Mobileye, and even considering a split of its core operations.

Amidst these financial pressures and strategic recalibrations, Intel’s commitment to expanding domestic chip production reflects a critical shift in U.S. defense and technology policy. The grants not only underscore the government’s push to strengthen national security but also highlight the precarious balancing act between fostering innovation and managing corporate instability.

Intel’s substantial government funding marks a pivotal moment in the semiconductor sector, potentially setting a precedent for future public-private partnerships aimed at fortifying American technological capabilities. As the announcement looms, the industry will be watching closely to see how Intel navigates its dual challenge of managing its internal turmoil while leading a transformative initiative for U.S. military tech.


TECH

How Propaganda Giants Handle U.S. Elections: A Study of Chinese and Russian Media Strategies


A Deep Dive into the Selective Coverage and Underlying Agendas of Beijing and Moscow’s Media Outlets

In the recent presidential debate between Kamala Harris and Donald Trump, a curious pattern emerged: while the debate generated significant buzz across the United States and Europe, it barely registered on the radar of Beijing and Moscow’s state-run media. This quiet response stands in stark contrast to the extensive coverage of the previous debate between Joe Biden and Trump, which was a focal point for Chinese and Russian outlets alike.

Chinese media’s subdued coverage of the Harris-Trump debate is telling. The extensive, albeit critical, coverage of Biden’s debate performance in June showcased the Chinese Communist Party’s (CCP) strategy of amplifying perceived democratic failures. Biden’s stumble was leveraged to cast doubt on the efficacy of democratic governance, a recurring theme in Chinese state media. Yet, with Harris and Trump, the coverage was conspicuously muted.

Analysts of Chinese media suggest that this shift may be due to the CCP’s cautious approach to evolving foreign policy narratives. “China is likely still calibrating its stance following Biden’s abrupt policy shifts,” says Kenton Thibaut, a senior resident China fellow at the Atlantic Council’s Digital Forensic Research Lab. Thibaut points out that the reduced coverage reflects a cautious, fact-based reporting style until the CCP can formulate a coherent narrative.

Another dimension of this media strategy involves China’s discomfort with democratic successes. Anne-Marie Brady, a professor of Chinese politics, and Jonathan Hassid, an Iowa State University professor, both highlight how Chinese media tend to spotlight democratic failures while downplaying successes. In contrast, a more positive portrayal of democratic processes might not align with the CCP’s narrative, which often focuses on criticizing the flaws of Western democracies.

Chinese Foreign Ministry spokesperson Mao Ning’s dismissal of the debate as “the United States’ own affairs” further underscores this hands-off approach, revealing Beijing’s preference to sidestep direct engagement with U.S. election matters.

Similarly, Russian state media has adopted a subtle but strategic approach. According to Darren Linvill, co-director of Clemson University’s Media Forensics Hub, Russian outlets like RT and Sputnik have been cautious with their coverage. While avoiding overt criticism, these outlets subtly downplay Harris and highlight Trump. For instance, some articles downplayed Harris’s performance, while others indulged in less direct commentary, such as suggestions about her “imposter syndrome.”

This restrained yet pointed coverage aligns with Moscow’s known preference for Trump, reflecting Russia’s strategic interests in fostering divisive narratives within the U.S. Recent accusations from the U.S. Justice Department about Russian operatives attempting to influence American media further emphasize the ongoing manipulation of narratives by Moscow.

The under-the-radar coverage by both Chinese and Russian media illustrates a broader strategy: avoiding direct engagement while subtly shaping global perceptions. The post-debate period is crucial for monitoring how these narratives evolve, particularly as information and disinformation campaigns ramp up.

As the election cycle continues, the strategic omissions and selective portrayals by Beijing and Moscow underscore the complexities of international media influence. This selective coverage not only highlights their biases but also serves as a reminder of the broader geopolitical chess game being played on the global stage.


Military

As Global Powers Battle Over AI in Warfare, Who Will Define the Rules?


AI’s Battlefield: The Race to Control Military’s New Frontier

The world is on the brink of a high-stakes showdown over artificial intelligence (AI) in warfare, with the specter of a new arms race looming large. The 2020s have ushered in an era of unprecedented transformation, where AI’s dual-use nature—serving both civilian and military purposes—has sparked urgent debates about global governance. As nations scramble to integrate AI into their defense systems, the quest to regulate this powerful technology has never been more critical—or more contentious.

The integration of AI into military operations is akin to the advent of nuclear weapons, raising fears of doomsday scenarios and global instability. The urgency for a unified framework to govern military AI is palpable, as countries race to secure their technological edge. Despite some progress, such as the European Union’s AI Act and a UN General Assembly resolution, these initiatives fall short of addressing the rapid pace of AI development in warfare.

Since 2023, two significant frameworks have emerged: the REAIM Summit and the U.S.-led Political Declaration. The REAIM Summit, a Dutch-South Korean initiative, represents a bottom-up approach. It’s a sprawling attempt to gather 2,000 participants from 100 countries to debate and shape norms for military AI. The “Call to Action” from this summit aims to create a comprehensive framework through regional workshops and further discussions in Seoul in 2024. Its inclusive stance is meant to foster global collaboration but could lead to slow, fragmented progress.

In contrast, the U.S. Political Declaration is a top-down approach, directly addressing sovereign states. Launched in February 2023, it’s backed by 54 countries, including nearly all EU member states. The declaration outlines ten measures and six pledges to regulate military AI. Yet, its effectiveness is in question, given potential shifts in U.S. leadership and the geopolitical tensions with China and Russia. Both superpowers view AI as a game-changer, with Russia accelerating its AI efforts despite ongoing conflict in Ukraine, and China eyeing AI as a strategic asset in its regional ambitions.

The challenge of achieving a universally agreed-upon convention is daunting. The rapid evolution of AI outpaces traditional arms control measures, making prolonged negotiations seem futile. While the REAIM Summit provides a platform for broader engagement, the Political Declaration serves as a pragmatic, albeit less ambitious, attempt to set international norms. However, the lack of support from major powers and the Global South complicates the process.

Europe, despite lagging behind the U.S., China, and Russia in military AI, has a pivotal role to play. The EU’s Defence Innovation Office in Kyiv highlights its commitment to understanding and leveraging military AI insights. For Europe, the stakes are high. By aligning with REAIM and advocating for the Political Declaration, Europe could play a crucial role in shaping a global governance framework for military AI, potentially tempering the rise of a new arms race.

As the global community grapples with the implications of military AI, the urgency for effective regulation is undeniable. Europe must lead the charge in making military AI governance a priority, balancing the ambitions of the REAIM Summit with the practicalities of the Political Declaration. The question remains: can the world’s powers find common ground before the technology they seek to control accelerates beyond their grasp?


Digital

South Africa’s AI Initiative Aims to Combat Violent Incitement


How Media Monitoring Africa’s New Tool Could Revolutionize Safety—and Raise Free Speech Concerns

Media Monitoring Africa (MMA) is rolling out an artificial intelligence tool aimed at detecting and flagging social media content that could incite violence. The initiative, named Insights into Incitement (I3), represents a significant leap in how technology is harnessed to prevent societal unrest—yet it raises profound questions about its implications for free speech.

I3 is designed to sift through an array of text data, including social media posts, news articles, and political commentaries, to identify and assess comments that might incite violence. It uses a sophisticated algorithm to rank the risk of these posts, marking them in red, yellow, or green based on their potential danger. The tool is accompanied by an online dashboard, offering a transparent, searchable interface for monitoring the flagged content.
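The traffic-light ranking described above can be sketched as a simple post-processing step on a model's risk score. The thresholds, scores, and function names below are invented for illustration only; MMA has not published I3's actual scoring logic.

```python
# Hypothetical sketch of a red/yellow/green risk ranking like the one
# described for I3. Thresholds and scores are made up for demonstration;
# the real tool derives scores from a trained model.
def risk_bucket(score: float) -> str:
    """Map an incitement-risk score in [0.0, 1.0] to a flag colour."""
    if score >= 0.7:
        return "red"      # high risk: surface immediately for review
    if score >= 0.4:
        return "yellow"   # uncertain: needs human judgement
    return "green"        # low risk: no action

# Example: scores a model might assign to three posts (invented IDs).
posts = {"post_101": 0.82, "post_102": 0.55, "post_103": 0.10}
flags = {post_id: risk_bucket(score) for post_id, score in posts.items()}
```

A dashboard like the one the article mentions would then filter or sort on these flags, with human reviewers focusing on the red and yellow buckets.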

This initiative emerged from the aftermath of the severe violence that rocked South Africa’s KwaZulu-Natal and Gauteng provinces in 2021, triggered by former President Jacob Zuma’s imprisonment. The ensuing riots, which resulted in 300 deaths and substantial property damage, exposed the role social media played in fueling unrest. MMA’s response aims to preempt such crises by targeting the incendiary content that sparks these disturbances.

At its core, I3 seeks to address the rising threats faced by minorities and vulnerable groups, including women, who are often targeted by hate-fueled rhetoric. “At particular risk are minorities, fueled by xeno- and Afrophobia as well as vulnerable groups,” the project’s designers note.

Yet, as the technology progresses, so does the potential for controversy. The tool’s training involves recognizing and flagging inciting phrases—a process that, while rigorous, might also capture benign discussions or legitimate dissent. Critics argue that such systems could inadvertently stifle free speech if not carefully managed.

The expansion of AI tools like I3 across Africa also presents a layer of irony. As AI continues to be deployed to identify and combat disinformation, there is a risk that the very technology could be misused to propagate false or harmful narratives. Recent reports, such as one from Freedom House, highlight the dual-edged nature of AI in disinformation: while it can combat fake news, it also has the potential to generate or amplify it.

South African attorney and tech law expert Zinhle Novazi, who lectures at Stellenbosch University, supports the tool’s intent but also raises concerns. On LinkedIn, Novazi emphasized that while I3 can significantly reduce response times to potential threats, ensuring the tool does not infringe on legitimate speech is crucial. “The challenge lies in ensuring that the tool is used responsibly and does not infringe upon legitimate expressions of opinion or dissent,” she cautions.

As South Africa pioneers this AI-driven approach to public safety, the debate is just beginning. The balance between leveraging technology for security and safeguarding freedoms will be critical as I3 and similar tools become integral to managing the digital landscape. This innovation promises to enhance safety, but it also underscores the need for rigorous oversight to prevent potential overreach and protect democratic principles.


TECH

U.S. Official Talks Responsible Military AI Use in Nigeria


Mallory Stewart’s Visit Highlights Commitment to Safe AI Integration in Africa

Mallory Stewart, Assistant Secretary of State for the Bureau of Arms Control, Deterrence and Stability, visited Nigeria this week to engage with local and regional authorities on the responsible use of artificial intelligence (AI) in military operations. The visit marks a significant step in the United States’ efforts to enhance security cooperation in Africa, reflecting a broader commitment to international norms and ethical considerations in military technology.

Stewart’s two-day visit included discussions with Nigerian officials and members of the Economic Community of West African States (ECOWAS). The meetings focused on the integration of AI in military contexts, emphasizing adherence to international laws and addressing inherent human biases in AI systems.

“We’ve learned the hard way that AI systems can reflect human biases, which may lead to misinformation being provided to decision-makers,” Stewart said. “Our goal is to collaborate with as many countries as possible that are integrating AI into their military operations, to minimize associated risks.”

The U.S. government’s initiative includes working with 55 nations, including those in Africa, to establish frameworks for the responsible use of military AI. This is part of a broader effort to enhance global security and ethical standards in technological advancements.

Nigeria, along with other African nations, is actively exploring the use of AI in its military operations. The country has faced significant security challenges, with sub-Saharan Africa identified as a terrorism hotspot in the Global Terrorism Index report, accounting for nearly 60% of terror-related deaths. While it remains unclear if terror groups are using AI, Nigeria is pushing for AI integration to improve its security capabilities.

Security analyst Kabiru Adamu from Beacon Consulting noted the potential benefits of AI in military operations. “Given the U.S.’s advanced technological capacity, their support could be invaluable for Nigeria, especially if they can tailor their assistance to the unique aspects of Nigeria’s security landscape,” Adamu said. He highlighted the need for adequate supporting infrastructure, such as reliable power sources, to effectively implement AI technologies.

Senator Iroegbu, founder of Global Sentinel online magazine, also emphasized the need for cautious and strategic implementation of AI. “While AI can reduce the number of troops needed and improve intelligence gathering, it’s crucial for Nigeria to develop its own policies and strategies for AI. Increased awareness and policy development are essential,” Iroegbu said.

In June, African ministers endorsed a landmark continental AI strategy aimed at advancing Africa’s digital future. Last week, the African Union approved the adoption of AI across public and private sectors in member states, including Nigeria. This marks a significant step in integrating AI into broader development and security strategies across the continent.

Stewart’s visit underscores the importance of international collaboration and responsible AI practices as African nations navigate the complex landscape of military technology and regional security challenges.


TECH

Meta Introduces Monetization for Facebook Creators in Kenya


Meta Unleashes Monetization for Kenyan Creators in a Controversial Bid to Dominate Africa’s Digital Landscape

Meta has shaken the digital landscape, unveiling monetization features that finally allow Kenyan creators to earn from their short-form videos on Facebook. This audacious move by the social media giant promises to disrupt the status quo and ignite a new wave of creativity across Africa. But what does it really mean for the vibrant Kenyan creator community and the global market?

In a much-anticipated announcement, Meta introduced two lucrative monetization options: in-stream ads that play before, during, or after videos, and ads on reels that accompany short clips. Kenya now joins an exclusive club of 12 African countries where Meta shares ad revenue with creators, including Egypt, Nigeria, Rwanda, Ghana, and the Seychelles.

“This expansion will empower eligible creators in the vibrant creative industry in Kenya to earn money, whilst setting the bar high for creativity across the world and making Meta’s family of apps the one-stop-shop for all creators,” declared Moon Baz, Meta’s global partnerships lead for Africa, the Middle East, and Turkey.

The journey to extend Meta’s monetization capabilities to Kenya began in March when Nick Clegg, Meta’s President of Global Affairs, visited the country and met with President William Ruto. This strategic engagement laid the groundwork for the features initially planned to roll out by June on both Facebook and Instagram. However, Instagram creators will have to wait longer as the announcement only covered Facebook, raising eyebrows and sparking speculation about Meta’s true intentions.

To qualify for this new revenue stream, creators must have at least 5,000 followers on Facebook and accumulate over 60,000 minutes of watch time in the past two months. These stringent criteria ensure that only the most dedicated and popular creators reap the benefits, pushing others to elevate their game.
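The reported thresholds amount to a straightforward eligibility check. The function and field names below are hypothetical, and Meta's actual review involves further criteria (such as content and policy compliance) beyond these two numbers.

```python
# Toy eligibility check based on the thresholds the article reports:
# at least 5,000 Facebook followers and more than 60,000 minutes of
# watch time over the past two months. Names are invented for this sketch.
def is_eligible(followers: int, watch_minutes_60d: int) -> bool:
    return followers >= 5_000 and watch_minutes_60d > 60_000

print(is_eligible(7_200, 85_000))   # meets both thresholds
print(is_eligible(4_000, 120_000))  # falls short on followers
```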

Facebook reigns supreme as the most popular social media platform in Kenya, boasting usage by at least 52% of Kenyans aged 15 and above, according to the latest statistics from the Communications Authority of Kenya. WhatsApp follows closely, used by 48.5% of the population, while Instagram lags at 11.5%. The introduction of these monetization features on Facebook, and not Instagram, hints at Meta’s strategic prioritization, but leaves Instagram creators in suspense.

Currently, only YouTube and X (formerly Twitter) share ad revenue with creators in Kenya, making Meta’s move a potential game-changer. This development is bound to stir controversy, with some viewing it as a necessary evolution, while others might see it as a calculated ploy to tighten Meta’s grip on Africa’s digital economy.

Meta’s decision to monetize short-form videos in Kenya is a bold statement. It underscores the company’s recognition of the continent’s untapped potential and its creators’ burgeoning talent. But it also raises questions about the broader implications for competition, content diversity, and the financial dynamics within the creator economy.

As Kenyan creators begin to harness these new tools, the digital world watches with bated breath. Will this lead to an explosion of creative content and financial independence for African creators, or is it a strategic maneuver by Meta to monopolize Africa’s digital narrative?

One thing is certain: the stakes have never been higher. Meta’s ambitious move has set the stage for a dramatic shift in how content is created, consumed, and monetized in Kenya and beyond. The impact of this decision will ripple across the digital landscape, forever altering the fabric of Africa’s creative economy.


Editor's Pick

Google Loses Landmark Antitrust Case Over Search Dominance


Judge Mehta’s decision could redefine the internet landscape and curb Google’s tech dominance.

U.S. District Judge Amit Mehta ruled that Google’s search engine has been unlawfully leveraging its market dominance to quash competition and stifle innovation. This landmark decision arrives nearly a year after the U.S. Justice Department launched the nation’s most significant antitrust case in a quarter-century against the tech behemoth.

Judge Mehta’s 277-page ruling, emerging three months after closing arguments, thoroughly dissects the tactics Google has employed to maintain its stranglehold on the search market. “After having carefully considered and weighed the witness testimony and evidence, the court reaches the following conclusion: Google is a monopolist, and it has acted as one to maintain its monopoly,” Mehta declared.

This ruling is a dramatic setback for Google and its parent company, Alphabet Inc., which has long argued that its dominance is purely a reflection of its superior product. Google’s search engine, processing approximately 8.5 billion queries daily, has become synonymous with internet searches globally.

Google’s defense hinged on consumer preference, citing its unmatched efficiency as the reason for its market position. Yet, Mehta’s ruling underscores that Google’s grip on the market is not just about quality but about strategic moves that prevent competitors from gaining a foothold. The Justice Department’s case painted Google as a ruthless corporate entity that has systematically obliterated competition to protect its digital advertising empire, which raked in nearly $240 billion last year.

Central to the court’s decision is Google’s practice of paying billions to be the default search engine on new devices, a strategy that effectively sidelines competitors. In 2021 alone, Google shelled out over $26 billion to secure these default agreements. Critics argue that these practices inflate advertising costs and hinder consumer choice.

Google dismissed these claims, pointing to the historical precedent of search engines like Yahoo, which once led the market but fell from grace as Google rose. Yet, Mehta highlighted evidence showing that default settings are pivotal, citing Microsoft’s Bing holding an 80% market share on the Edge browser—proof that competitors can thrive when given default placement.

While acknowledging Google’s superior search capabilities, Mehta’s ruling sets the stage for a new phase where penalties and remedies will be debated to restore competitive balance. This decision could catapult Microsoft’s Bing and other search engines into more significant roles, particularly as artificial intelligence reshapes the tech frontier.

Satya Nadella, Microsoft’s CEO, was a star witness, articulating the challenges Bing faced due to Google’s deals with companies like Apple. Nadella’s frustration was palpable as he described the monopolistic landscape: “You get up in the morning, you brush your teeth, and you search on Google. Everybody talks about the open web, but there is really the Google web.”

Nadella warned that without antitrust intervention, Google’s dominance could become even more unassailable with the rise of AI, potentially stifling future innovation in the search market.

Google, predictably, plans to appeal, potentially escalating the case to the U.S. Supreme Court. This decision vindicates the Justice Department’s efforts to curb Big Tech’s power, a crusade that intensified under President Joe Biden’s administration.

This ruling marks just one battle in Google’s ongoing legal wars. The company faces numerous other antitrust suits both domestically and internationally. A federal trial in Virginia looms on the horizon, challenging Google’s advertising technology monopoly.

The implications of this decision extend beyond Google, potentially setting a precedent for how tech giants operate. Will we witness a more competitive and innovative digital marketplace, or will Google’s deep pockets and legal acumen enable it to maintain its dominance? As the appeal process unfolds, all eyes will be on the courts to see if this ruling signals a new era of accountability and fairness in the tech industry.


TECH

Meta Takes Down Thousands of Facebook Accounts in Nigeria Engaged in Sextortion Scams


Major Crackdown on Sextortion Networks Linked to Nigeria’s Yahoo Boys


Meta has dismantled approximately 63,000 Facebook accounts in Nigeria engaged in sextortion scams, targeting mostly adult men in the U.S. and some minors. The accounts were linked to the Yahoo Boys group, known for sexual extortion tactics. This follows high-profile cases and a significant rise in sextortion incidents, with Meta reporting some attempts to the National Center for Missing and Exploited Children. Meta is implementing new Instagram tools to protect users, especially minors, from such threats, including features that blur nudity in direct messages.
