Elon Musk reacts to 'Bengali' Signboard at London Station
Rupert Lowe, the Member of Parliament (MP) for Great Yarmouth, shared an image on his official X account of a bilingual sign at Whitechapel Station, which has sparked debate. The sign, written in both English and Bengali, has been criticized by some, including Lowe, who believes that signs at London stations should be in English alone.
In his post, Lowe, a Reform UK MP, expressed his opinion that "This is London – the station name should be in English, and English only," which quickly went viral. Elon Musk, the billionaire owner of X, responded with a simple "Yes."
Musk, who recently called for Nigel Farage's removal as the leader of Reform UK, has seemingly endorsed Lowe's views. Some users supported the MP's stance, while others argued that having signs in multiple languages was not an issue.
The Bengali signage was installed at Whitechapel Tube station in 2022 to honor the contributions of the Bangladeshi community in East London. The Tower Hamlets council funded the dual-language signs as part of broader station improvements. Whitechapel is home to the largest Bangladeshi community in the UK.
West Bengal Chief Minister Mamata Banerjee praised the initiative, expressing pride that Bengali had been accepted as a language for signage at the station. She highlighted the global significance of the Bengali language, calling the move a "victory of our culture and heritage" and underscoring the importance of diaspora unity.
Source: With inputs from agency
6 days ago
Understanding Zero-Click Hacks: The Growing Cyber Threat to WhatsApp Users
In an era where digital security is paramount, cyber threats are evolving at an alarming pace.
Among the latest and most concerning hacking techniques is the Zero-Click Hack, a sophisticated cyberattack that allows hackers to infiltrate a user's device without any interaction from the victim.
Recent reports indicate that nearly 90 WhatsApp users across more than two dozen countries have fallen victim to this silent yet dangerous hacking method.
What is a Zero-Click Hack?
As the name suggests, a zero-click hack is a form of cyberattack that does not require the user to click on a malicious link, download a file, or take any action.
Unlike traditional phishing attempts that rely on social engineering, these attacks exploit software vulnerabilities to gain unauthorised access.
Hackers typically exploit weaknesses in messaging applications, email clients, or multimedia processing functions, sending malicious electronic documents that compromise devices without requiring any user interaction.
In the case of WhatsApp, the attackers took advantage of vulnerabilities in the messaging app, allowing them to gain access to sensitive information.
How Do Zero-Click Attacks Work?
Zero-click attacks work by sending malicious files to targeted individuals. These files are processed by the operating system or application without the user's knowledge, granting hackers access to vital data such as messages, call logs, photos, and even the device’s microphone and camera.
This type of cyberattack is particularly dangerous because it is difficult to detect and prevent. Since there is no need for user interaction, conventional security awareness—such as avoiding suspicious links—does not provide protection against such threats.
The WhatsApp Security Breach
WhatsApp recently disclosed that nearly 90 users had been targeted by hackers using spyware developed by the Israeli company Paragon Solutions. This spyware enabled attackers to infiltrate victims' devices without requiring them to take any action.
Among those affected were journalists and members of civil society. In response, WhatsApp has sent a cease-and-desist letter to Paragon Solutions and has reassured users of its commitment to maintaining privacy and security.
How to Stay Safe from Zero-Click Attacks
While zero-click attacks are highly sophisticated and challenging to prevent, users can take certain precautions to minimise the risk:
Keep Apps Updated: Always update your applications to the latest versions. Updates often include security patches that fix vulnerabilities exploited by hackers.
Enable Automatic Updates: This ensures that your device installs security updates as soon as they become available, reducing the window of opportunity for hackers to exploit vulnerabilities.
Monitor Device Behaviour: Unusual signs, such as sudden battery drainage, unexpected app behaviour, or strange messages from unknown contacts, may indicate a compromise.
Report Suspicious Activity: If you suspect your device has been compromised, report it to your local cybercrime unit immediately.
The Fight Against Cyber Threats
Despite the increasing sophistication of cyberattacks, companies like WhatsApp continue to implement security measures to protect user data. However, digital safety remains a shared responsibility. Users must stay informed about emerging threats and adopt best practices to safeguard their digital presence.
As technology advances, so do the tactics employed by cybercriminals. Zero-click hacks serve as a stark reminder that cybersecurity vigilance is more critical than ever.
8 days ago
WhatsApp accuses Israeli spyware firm of targeting journalists, activists
WhatsApp has accused Israeli spyware company Paragon Solutions of targeting nearly 100 journalists and civil society members using its sophisticated spyware, Graphite.
The attacks, reportedly carried out using zero-click methods, have raised fresh concerns about the misuse of commercial surveillance tools and the lack of accountability within the industry.
According to a report by The Guardian, WhatsApp has "high confidence" that around 90 users, including journalists and activists, were targeted and possibly compromised.
The company did not disclose the locations of the affected individuals but confirmed that they had been notified of the potential breach. WhatsApp has also sent a cease-and-desist letter to Paragon and is considering legal action against the firm.
Zero-Click Attack and Full Device Access
Graphite, Paragon’s spyware, is reportedly capable of infiltrating a device without requiring any interaction from the victim, making it a particularly dangerous tool for surveillance. Once installed, the software provides complete access to the infected phone, including the ability to read messages sent through encrypted apps such as WhatsApp and Signal.
While the identity of those behind the attacks remains unknown, Paragon Solutions is known to sell its software to government clients. A source close to the company claimed that it has 35 government customers, all of which are democratic nations.
The source further stated that Paragon avoids doing business with countries that have previously been accused of spyware abuse, such as Greece, Poland, Hungary, Mexico, and India.
Growing Scrutiny of Spyware Industry
The incident has intensified scrutiny of the commercial spyware industry. Natalia Krapiva, a senior tech legal counsel at Access Now, commented on the matter, stating, "This is not just a question of some bad apples — these types of abuses are a feature of the commercial spyware industry."
While Paragon had been perceived as a relatively less controversial spyware provider, WhatsApp’s revelations have called that perception into question.
This development follows a recent legal victory for WhatsApp against NSO Group, another Israeli spyware maker. In December, a California judge ruled that NSO was liable for hacking 1,400 WhatsApp users in 2019, violating US hacking laws and the platform’s terms of service.
In 2021, NSO Group was also added to the US commerce department’s blacklist due to activities deemed contrary to US national security interests.
WhatsApp’s Response and Future Security Measures
WhatsApp has not disclosed how long the targeted users may have been under surveillance but confirmed that the alleged attacks were disrupted in December. The company is now working to support affected users and reinforce its security measures to prevent future breaches.
As concerns over spyware misuse continue to grow, this latest revelation underscores the need for stricter regulations and international cooperation to curb the abuse of surveillance technologies.
14 days ago
Elon Musk's DOGE commission gains access to sensitive Treasury payment systems: AP sources
The Department of Government Efficiency, run by President Donald Trump's billionaire adviser and Tesla CEO Elon Musk, has gained access to sensitive Treasury data including Social Security and Medicare customer payment systems, according to two people familiar with the situation.
The move by DOGE, a Trump administration task force assigned to find ways to fire federal workers, cut programs and slash federal regulations, means it could have wide leeway to access important taxpayer data, among other things.
The New York Times first reported the news of the group's access of the massive federal payment system. The two people who spoke to The Associated Press spoke on condition of anonymity because they were not authorized to speak publicly.
The highest-ranking Democrat on the Senate Finance Committee, Ron Wyden of Oregon, on Friday sent a letter to Trump's Treasury Secretary Scott Bessent expressing concern that “officials associated with Musk may have intended to access these payment systems to illegally withhold payments to any number of programs.”
“To put it bluntly, these payment systems simply cannot fail, and any politically motivated meddling in them risks severe damage to our country and the economy," Wyden said.
The news also comes after Treasury's acting Deputy Secretary David Lebryk resigned from his position at Treasury after more than 30 years of service. The Washington Post on Friday reported that Lebryk resigned his position after Musk and his DOGE organization requested access to sensitive Treasury data.
“The Fiscal Service performs some of the most vital functions in government," Lebryk said in a letter to Treasury employees sent out Friday. “Our work may be unknown to most of the public, but that doesn’t mean it isn’t exceptionally important. I am grateful for having been able to work alongside some of the nation’s best and most talented operations staff.”
The letter did not mention a DOGE request to access Treasury payments.
Musk on Saturday responded to a post on his social media platform X about the departure of Lebryk: “The @DOGE team discovered, among other things, that payment approval officers at Treasury were instructed always to approve payments, even to known fraudulent or terrorist groups. They literally never denied a payment in their entire career. Not even once."
He did not provide proof of this claim.
DOGE was originally headed by Musk and former Republican presidential candidate Vivek Ramaswamy, who jointly vowed to cut billions from the federal budget and usher in “mass headcount reductions across the federal bureaucracy.”
Ramaswamy has since left DOGE as he mulls a run for governor of Ohio.
14 days ago
Families sue TikTok in France over teen suicides they say are linked to harmful content
In the moment when her world shattered three years ago, Stephanie Mistre found her 15-year-old daughter, Marie, lifeless in the bedroom where she died by suicide.
“I went from light to darkness in a fraction of a second,” Mistre said, describing the day in September 2021 that marked the start of her fight against TikTok, the Chinese-owned video app she blames for pushing her daughter toward despair.
Delving into her daughter’s phone after her death, Mistre discovered videos promoting suicide methods, tutorials and comments encouraging users to go beyond “mere suicide attempts.” She said TikTok’s algorithm had repeatedly pushed such content to her daughter.
“It was brainwashing,” said Mistre, who lives in Cassis, near Marseille, in the south of France. “They normalized depression and self-harm, turning it into a twisted sense of belonging.”
Now Mistre and six other families are suing TikTok France, accusing the platform of failing to moderate harmful content and exposing children to life-threatening material. Out of the seven families, two experienced the loss of a child.
Asked about the lawsuit, TikTok said its guidelines forbid any promotion of suicide and that it employs 40,000 trust and safety professionals worldwide — hundreds of whom are French-speaking moderators — to remove dangerous posts. The company also said it refers users who search for suicide-related videos to mental health services.
Before killing herself, Marie Le Tiec made several videos to explain her decision, citing various difficulties in her life, and quoted a song by the Louisiana-based emo rap group Suicideboys, who are popular on TikTok.
Her mother also claims that her daughter was repeatedly bullied and harassed at school and online. In addition to the lawsuit, the 51-year-old mother and her husband have filed a complaint against five of Marie’s classmates and her previous high school.
Above all, Mistre blames TikTok, saying that putting the app "in the hands of an empathetic and sensitive teenager who does not know what is real from what is not is like a ticking bomb.”
Scientists have not established a clear link between social media and mental health problems or psychological harm, said Grégoire Borst, a professor of psychology and cognitive neuroscience at Paris-Cité University.
“It’s very difficult to show clear cause and effect in this area,” Borst said, citing a leading peer-reviewed study that found only 0.4% of the differences in teenagers’ well-being could be attributed to social media use.
Additionally, Borst pointed out that no current studies suggest TikTok is any more harmful than rival apps such as Snapchat, X, Facebook or Instagram.
While most teens use social media without significant harm, the real risks, Borst said, lie with those already facing challenges such as bullying or family instability.
“When teenagers already feel bad about themselves and spend time exposed to distorted images or harmful social comparisons," it can worsen their mental state, Borst said.
Lawyer Laure Boutron-Marmion, who represents the seven families suing TikTok, said their case is based on “extensive evidence.” The company "can no longer hide behind the claim that it’s not their responsibility because they don’t create the content,” Boutron-Marmion said.
The lawsuit alleges that TikTok’s algorithm is designed to trap vulnerable users in cycles of despair for profit and seeks reparations for the families.
“Their strategy is insidious,” Mistre said. “They hook children into depressive content to keep them on the platform, turning them into lucrative re-engagement products.”
Boutron-Marmion noted that TikTok’s Chinese version, Douyin, features much stricter content controls for young users. It includes a “youth mode” mandatory for users under 14 that restricts screen time to 40 minutes a day and offers only approved content.
“It proves they can moderate content when they choose to,” Boutron-Marmion said. “The absence of these safeguards here is telling.”
A report titled “Children and Screens,” commissioned by French President Emmanuel Macron in April and to which Borst contributed, concluded that certain algorithmic features should be considered addictive and banned from any app in France. The report also called for restricting social media access for minors under 15 in France. Neither measure has been adopted.
TikTok, which faced being shut down in the U.S. until President Donald Trump suspended a ban on it, has also come under scrutiny globally.
The U.S. has seen similar legal efforts by parents. One lawsuit in Los Angeles County accuses Meta and its platforms Instagram and Facebook, as well as Snapchat and TikTok, of designing defective products that cause serious injuries. The lawsuit lists three teens who died by suicide. In another complaint, two tribal nations accuse major social media companies, including YouTube owner Alphabet, of contributing to high rates of suicide among Native youths.
Meta CEO Mark Zuckerberg apologized to parents who had lost children while testifying last year in the U.S. Senate.
In December, Australia enacted a groundbreaking law banning social media accounts for children under 16.
In France, Boutron-Marmion expects TikTok Limited Technologies, the European Union subsidiary for ByteDance — the Chinese company that owns TikTok — to answer the allegations in the first quarter of 2025. Authorities will later decide whether and when a trial would take place.
When contacted by The Associated Press, TikTok said it had not been notified about the French lawsuit, which was filed in November. It could take months for the French justice system to process the complaint and for authorities in Ireland — home to TikTok’s European headquarters — to formally notify the company, Boutron-Marmion said.
Instead, a TikTok spokesperson highlighted company guidelines that prohibit content promoting suicide or self-harm.
Critics argue that TikTok’s claims of robust moderation fall short.
Imran Ahmed, the CEO of the Center for Countering Digital Hate, dismissed TikTok’s assertion that over 98.8% of harmful videos had been flagged and removed between April and June.
When asked about the blind spots of their moderation efforts, social media platforms claim that users are able to bypass detection by using ambiguous language or allusions that algorithms struggle to flag, Ahmed said.
The term “algospeak” has been coined to describe techniques such as using zebra or armadillo emojis to talk about cutting yourself, or the Swiss flag emoji as an allusion to suicide.
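A minimal sketch of why such codes evade simple moderation: a static lookup table, like the toy one below, only catches symbols someone has already added to it, while the codes themselves change faster than any table can be maintained. The two emoji codes are the examples cited above; the function and table names are illustrative, not any platform's real system.

```python
# Toy lookup table mapping coded emojis (examples cited by researchers)
# to the topics they stand in for. Real moderation relies on ML
# classifiers and context models, not a hand-maintained dictionary.
ALGOSPEAK_CODES = {
    "\U0001F993": "self-harm",          # zebra emoji used as a code word
    "\U0001F1E8\U0001F1ED": "suicide",  # Swiss flag emoji used as a code word
}

def flag_coded_terms(text: str) -> set[str]:
    """Return the set of topics whose coded symbols appear in the text."""
    return {topic for code, topic in ALGOSPEAK_CODES.items() if code in text}
```

Any newly coined code — a different emoji, a deliberate misspelling — sails through until a human adds it to the table, which is the gap Ahmed's criticism targets.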
Such code words “aren’t particularly sophisticated,” Ahmed said. “The only reason TikTok can’t find them when independent researchers, journalists and others can is because they’re not looking hard enough.”
Ahmed’s organization conducted a study in 2022 simulating the experience of a 13-year-old girl on TikTok.
“Within 2.5 minutes, the accounts were served self-harm content,” Ahmed said. “By eight minutes, they saw eating disorder content. On average, every 39 seconds, the algorithm pushed harmful material.”
The algorithm “knows that eating disorder and self-harm content is especially addictive” for young girls, Ahmed said.
For Mistre, the fight is deeply personal. Sitting in her daughter’s room, where she has kept the decor untouched for the last three years, she said parents must know about the dangers of social media.
Had she known about the content being sent to her daughter, she never would have allowed her on TikTok, she said. Her voice breaks as she describes Marie as a “sunny, funny” teenager who dreamed of becoming a lawyer.
“In memory of Marie, I will fight as long as I have the strength,” she said. “Parents need to know the truth. We must confront these platforms and demand accountability.”
21 days ago
All social media platforms including Facebook to be unblocked within 2 hours today, Palak says
All social media platforms including Facebook will be unblocked within two hours on Wednesday.
State Minister for Posts, Telecommunications, and Information Technology Zunaid Ahmed Palak confirmed the development.
Palak shared the update following a virtual meeting with representatives from Facebook, TikTok, and YouTube, joining from the Bangladesh Telecommunication Regulatory Commission (BTRC) building in Dhaka's Agargaon this morning.
Earlier, on July 18, internet services were disrupted and access to social media platforms was blocked.
6 months ago
TikTok to start labeling AI-generated content as technology becomes more universal
TikTok will begin labeling content created using artificial intelligence when it's uploaded from certain platforms.
TikTok says the effort is an attempt to prevent misinformation from being spread on its social media platform.
The announcement came on ABC's “Good Morning America” on Thursday.
“Our users and our creators are so excited about AI and what it can do for their creativity and their ability to connect with audiences,” Adam Presser, TikTok’s Head of Operations & Trust and Safety, told ABC News. “And at the same time, we want to make sure that people have that ability to understand what fact is and what is fiction.”
TikTok's policy in the past has been to encourage users to label content that has been generated or significantly edited by AI. It also requires users to label all AI-generated content that contains realistic images, audio or video.
9 months ago
Anonymous users are dominating right-wing discussions online. They also spread false information
The reposts and expressions of shock from public figures followed quickly after a user on the social platform X who uses a pseudonym claimed that a government website had revealed “skyrocketing” rates of voters registering without a photo ID in three states this year — two of them crucial to the presidential contest.
“Extremely concerning,” X owner Elon Musk replied twice to the post this past week.
“Are migrants registering to vote using SSN?” Georgia Rep. Marjorie Taylor Greene, an ally of former President Donald Trump, asked on Instagram, using the acronym for Social Security number.
Trump himself posted to his own social platform within hours to ask, “Who are all those voters registering without a Photo ID in Texas, Pennsylvania, and Arizona??? What is going on???"
State election officials soon found themselves forced to respond. They said the user, who pledges to fight, expose and mock “wokeness,” was wrong and had distorted Social Security Administration data. Actual voter registrations during the time period cited were much lower than the numbers being shared online.
Stephen Richer, the recorder in Maricopa County, Arizona, which includes Phoenix, refuted the claim in multiple X posts, while Jane Nelson, the secretary of state in Texas, issued a statement calling it “totally inaccurate.”
Yet by the time they tried to correct the record, the false claim had spread widely. In three days, the pseudonymous user’s claim amassed more than 63 million views on X, according to the platform’s metrics. A thorough explanation from Richer attracted a fraction of that, reaching 2.4 million users.
The incident sheds light on how social media accounts that shield the identities of the people or groups behind them through clever slogans and cartoon avatars have come to dominate right-wing political discussion online even as they spread false information.
The accounts enjoy a massive reach that is boosted by engagement algorithms, by social media companies greatly reducing or eliminating efforts to remove phony or harmful material, and by endorsements from high-profile figures such as Musk. They also can generate substantial financial rewards from X and other platforms by ginning up outrage against Democrats.
Many such internet personalities identify as patriotic citizen journalists uncovering real corruption. Yet their demonstrated ability to spread misinformation unchecked while disguising their true motives worries experts with the United States in a presidential election year.
They are exploiting a long history of trust in American whistleblowers and anonymous sources, said Samuel Woolley, director of the Propaganda Research Lab at the University of Texas at Austin.
“With these types of accounts, there’s an allure of covertness, there’s this idea that they somehow might know something that other people don’t,” he said. “They’re co-opting the language of genuine whistleblowing or democratically inclined leaking. In fact what they’re doing is antithetical to democracy.”
The claim that spread online this past week misused Social Security Administration data tracking routine requests made by states to verify the identity of individuals who registered to vote using the last four digits of their Social Security number. These requests are often made multiple times for the same individual, meaning they do not necessarily correspond one-to-one with people registering to vote.
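The overcount can be pictured with a toy example (the numbers below are entirely made up): counting verification requests is not the same as counting registrants, because the same person can generate several requests.

```python
# Made-up illustration: states often submit several verification
# requests for the same registrant, so the raw request count
# overstates the number of people who actually registered.
requests = ["reg-1", "reg-1", "reg-2", "reg-3", "reg-3", "reg-3"]

total_requests = len(requests)           # what the viral post counted: 6
unique_registrants = len(set(requests))  # what it implied it counted: 3
```

Treating the first number as if it were the second is the core of the distortion state officials had to correct.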
The larger implication is that the cited data represents people who entered the U.S. illegally and are supposedly registering to vote with Social Security numbers they received for work authorization documents. But only U.S. citizens are allowed to vote in federal elections and illegal voting by those who are not is exceedingly rare because states have processes to prevent it.
Accounts that do not disclose the identities of those behind them have thrived online for years, gaining followers for their content on politics, humor, human rights and more. People have used anonymity on social media to avoid persecution by repressive authorities or to speak freely about sensitive experiences. Many left-wing protesters adopted anonymous online identities during the Occupy Wall Street movement of the early 2010s.
The meteoric rise of a group of right-wing pseudonymous influencers who act as alternative information sources has been more recent. It's coincided with a decline in public trust in government and media through the 2020 presidential election and the COVID-19 pandemic.
These influencers frequently spread misinformation and otherwise misleading content, often in service of the same recurring narratives such as alleged voter fraud, the “woke agenda” or Democrats supposedly encouraging a surge of people through illegal immigration to steal elections or replace whites. They often use similar content and reshare each other's posts.
The account that posted the recent misinformation also has spread bogus information about the Israel-Hamas war, sharing a post last fall that falsely claimed to show a Palestinian “crisis actor" pretending to be seriously injured.
Since his takeover of Twitter in 2022, Musk has nurtured the rise of these accounts, frequently commenting on their posts and sharing their content. He also has protected their anonymity. In March, X updated its privacy policy to ban people from exposing the identity of an anonymous user.
Musk also rewards high engagement with financial payouts. The X user who spread the false information about new voter registrants has racked up more than 2.4 million followers since joining the platform in 2022. The user, in a post last July, reported earning more than $10,000 from X's new creator ad revenue program. X did not respond to a request for comment beyond an automated reply.
Tech watchdogs said that while it’s critical to maintain spaces for anonymous voices online, they shouldn’t be allowed to spread lies without accountability.
“Companies must vigorously enforce terms of service and content policies that promote election integrity and information integrity generally,” said Kate Ruane, director of the Free Expression Project at the Center for Democracy and Technology.
The success of these accounts shows how financially savvy users have deployed the online trolling playbook to their advantage, said Dale Beran, a lecturer at Morgan State University and the author of “It Came from Something Awful: How a Toxic Troll Army Accidentally Memed Donald Trump into Office.”
“The art of trolling is to get the other person enraged," he said. "And we now know getting someone enraged really fuels engagement and gives you followers and so will get you paid. So now it’s sort of a business.”
Some pseudonymous accounts on X have used their brands to build loyal audiences on other platforms, from Instagram to the video-sharing platform Rumble and the encrypted messaging platform Telegram. The accounts themselves — and many of their followers — publicly promote their pride in America and its founding documents.
It's concerning that many Americans place their trust in these shadowy online sources without thinking critically about who is behind them or how they may want to harm the country, said Kara Alaimo, a communications professor at Fairleigh Dickinson University who has written about toxicity on social media.
“We know that foreign governments including China and Russia are actively creating social media accounts designed to sow domestic discord because they think weakening our social fabric gives their countries a competitive advantage," she said. "And they’re right.”
10 months ago
Facebook, Instagram users will start seeing labels on AI-generated images
Facebook and Instagram users will start seeing labels on AI-generated images that appear on their social media feeds, part of a broader tech industry initiative to sort between what’s real and not.
Meta said Tuesday it's working with industry partners on technical standards that will make it easier to identify images and eventually video and audio generated by artificial intelligence tools.
What remains to be seen is how well it will work at a time when it's easier than ever to make and distribute AI-generated imagery that can cause harm — from election misinformation to nonconsensual fake nudes of celebrities.
“It's kind of a signal that they’re taking seriously the fact that generation of fake content online is an issue for their platforms,” said Gili Vidan, an assistant professor of information science at Cornell University. It could be “quite effective” in flagging a large portion of AI-generated content made with commercial tools, but it won't likely catch everything, she said.
Meta's president of global affairs, Nick Clegg, didn’t specify Tuesday when the labels would appear but said it will be “in the coming months” and in different languages, noting that a “number of important elections are taking place around the world.”
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” he said in a blog post.
Meta already puts an “Imagined with AI” label on photorealistic images made by its own tool, but most of the AI-generated content flooding its social media services comes from elsewhere.
A number of tech industry collaborations, including the Adobe-led Content Authenticity Initiative, have been working to set standards. A push for digital watermarking and labeling of AI-generated content was also part of an executive order that U.S. President Joe Biden signed in October.
Clegg said that Meta will be working to label “images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.”
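As a rough illustration of how metadata-based labeling can work on the receiving side, the sketch below scans an image file's raw bytes for two real provenance markers: the C2PA manifest label and the IPTC "trainedAlgorithmicMedia" digital-source-type value. This is a simplification for illustration only — production systems parse the metadata structures properly rather than searching bytes, and nothing here reflects Meta's actual pipeline.

```python
# Markers that AI tools embedding provenance metadata may leave in a file.
# C2PA manifests are labeled "c2pa"; the IPTC digital-source-type value
# "trainedAlgorithmicMedia" denotes purely AI-generated media.
AI_PROVENANCE_MARKERS = [
    b"c2pa",
    b"trainedAlgorithmicMedia",
]

def has_ai_provenance_marker(image_bytes: bytes) -> bool:
    """Return True if any known AI-provenance marker appears in the bytes."""
    return any(marker in image_bytes for marker in AI_PROVENANCE_MARKERS)
```

The obvious limitation, noted later in the article, is that the check only works when the generating tool wrote the metadata in the first place; images made with tools that skip it, or stripped of metadata, pass silently.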
Google said last year that AI labels are coming to YouTube and its other platforms.
"In the coming months, we’ll introduce labels that inform viewers when the realistic content they’re seeing is synthetic,” YouTube CEO Neal Mohan reiterated in a year-ahead blog post Tuesday.
One potential concern for consumers is if tech platforms get more effective at identifying AI-generated content from a set of major commercial providers but miss what's made with other tools, creating a false sense of security.
“There’s a lot that would hinge on how this is communicated by platforms to users,” said Cornell's Vidan. “What does this mark mean? With how much confidence should I take it? What is its absence supposed to tell me?”
1 year ago
Facebook and Instagram users in Europe could get ad-free subscription option, WSJ reports
Meta plans to give Facebook and Instagram users in Europe the option of paying for ad-free versions of the social media platforms as a way to comply with the continent's strict data privacy rules, the Wall Street Journal reported Tuesday.
The company wants to charge users about 10 euros ($10.50) a month to use Instagram or Facebook without ads on desktop browsers, the newspaper reported, citing unnamed people familiar with the proposal. Adding more accounts would cost 6 euros each.
Prices for mobile would be higher, at roughly 13 euros a month, because Meta needs to account for commissions charged by the Apple and Google app stores on in-app payments, the newspaper said.
Meta reportedly is hoping to roll out paid subscriptions in the coming months as a way to comply with European Union data privacy rules that threaten its lucrative business model of showing personalized ads to users.
Meta would give users the choice between continuing to use the platforms with ads or paying for the ad-free version, the WSJ said.
"Meta believes in the value of free services which are supported by personalized ads," the company said in a statement to The Associated Press. "However, we continue to explore options to ensure we comply with evolving regulatory requirements. We have nothing further to share at this time."
The EU's top court said in July that Meta must first get consent before showing ads to users — a ruling that jeopardizes the company's ability to make money by tailoring advertisements for individual users based on their online interests and digital activity.
It's not clear if EU regulators will sign off on the plan or insist that the company offer cheaper versions. The newspaper said one issue regulators have is whether the proposed fees will be too expensive for most people who don't want to be targeted by ads.
1 year ago