After TikTok, EU to scrutinise US-based social media apps for data privacy, safeguards against AI

Now that the US House of Representatives has passed a bill that could ban TikTok or force its owner, ByteDance, to divest its stake in the platform, other social media companies will also come under EU scrutiny, particularly over how they handle user data. While TikTok is currently the primary target of scrutiny over its alleged ties to the Chinese government, it is not alone in facing international censure.

Over the past year, tech giants including Amazon, Meta (formerly Facebook), Apple, and others have found themselves embroiled in legal battles over content moderation and data privacy issues on both sides of the Atlantic.

This collective scrutiny marks the emergence of a new regulatory consensus in Western nations, indicating a shift towards more stringent oversight of the tech industry.

The European Union (EU) has also taken action against TikTok, launching an investigation into the platform’s alleged failure to protect minors in February. This investigation follows a substantial fine of $372 million imposed on TikTok by the EU just six months earlier for similar violations. Under the EU’s new Digital Services Act, TikTok could face penalties of up to $800 million, or 6% of its global turnover.

With significant elections scheduled in Europe and the United States later this year, consumers should prepare for substantial changes in their online experiences, prompting questions about the future of social media platforms.

The renewed focus on tech companies places increased pressure on TikTok and its parent company, ByteDance. Even if the proposed TikTok ban clears the US Senate, ByteDance would have a five-month window to sell TikTok’s US operations before facing more severe measures.

However, such decisions will likely face legal challenges, as seen with previous attempts by states like Montana to enact bans on TikTok, resulting in disputes over First Amendment rights. The evolving legislative landscape underscores a shift in priorities from protecting freedom of speech to prioritizing user protection, reflecting an international trend towards greater regulatory oversight of online platforms.

The EU’s Digital Services Act is the latest in a series of global regulations designed to address online safety concerns.

These regulations, which place greater responsibility on platforms, depart from earlier internet legislation. In the 1990s, laws governing online service providers, such as Section 230 in the United States, focused on extending First Amendment protections.

However, recent events, including public outrage over cases like the Molly Russell inquest and US Senate hearings on online child exploitation, have prompted regulators to emphasize online safety and transparency. The shift reflects growing concerns that platforms prioritizing user acquisition over safety can significantly harm users.

The adoption of regulatory measures extends beyond the EU, with countries such as the United States, Australia, Singapore, South Korea, and several Latin American nations rolling out their own legislation in recent years. This global consensus marks a significant departure from previous approaches and signifies a unified effort to establish more robust regulatory frameworks to protect internet users.

Despite their immense value to local economies, tech giants like Google, Amazon, Meta, Apple, and Microsoft face increased regulatory scrutiny. Recent fines imposed on Apple under EU antitrust laws and the ongoing enforcement of GDPR demonstrate the growing willingness of regulators to hold tech companies accountable.

Meanwhile, the European Commission is already tightening the screws on big tech companies, including Google, Facebook, and TikTok, with requests for information on how they are dealing with risks from generative artificial intelligence, such as the viral spread of deepfakes. To that end, it has sent questionnaires to eight platforms and search engines, including Microsoft’s Bing, Instagram, Snapchat, YouTube, and X (formerly Twitter), asking how they are curbing the risks of generative AI.

European users, constituting a significant portion of social media platforms’ user bases and advertising revenue, are particularly influential in shaping regulatory responses.

The evolving relationship between regulators and tech companies is characterized by complexity and interdependence. While calls for regulatory constraints on Big Tech are growing, unilateral bans remain uncertain. Both parties recognize the importance of collaboration, as highlighted by statements from figures like Elon Musk and Mark Zuckerberg.

For users, changes in online features and services are imminent, with potential shifts towards subscription models to offset compliance costs. Such agreements may benefit consumers by promoting digital literacy and safeguarding personal data from exploitation by ad-dependent companies.

As technology and democracy intersect in a pivotal year, the ongoing debate between regulators and online platforms is expected to intensify. The desired outcomes of this ongoing struggle are greater legislative clarity and a safer online environment, promising a future where user protection and online safety are paramount.
