Social Media
YouTube to shut down trending page on July 21
YouTube has announced it will shut down its Trending Page on July 21, nearly 10 years after its launch in 2015, citing a significant drop in user engagement.
In a blog post on the YouTube Help page, the company revealed that visits to the Trending Page have decreased sharply over the past five years as users increasingly discover popular content through other features like recommendations, search suggestions, Shorts, comments, and Communities.
The company confirmed the July 21 shutdown date in the Trending page’s Help Center entry.
Going forward, YouTube will highlight trending content through YouTube Charts. Though currently limited in scope, Charts lets users explore Trending Music Videos, Weekly Top Podcast Shows, and Trending Movie Trailers, and more content categories will be added over time. Gaming videos will continue to appear on the Gaming Explore page.
In addition to Charts, YouTube said it will offer personalised video recommendations, allowing a “wider range of popular content” to be shown to users based on individual preferences. Non-personalised trending content will still be available via the Explore Page, creator channels, and subscription feeds.
Content creators have long used the Trending Page to promote videos and monitor viral trends. For them, YouTube said the Inspiration tab in YouTube Studio will continue to offer personalised content ideas.
The platform also announced an update to its monetisation policy, aimed at curbing inauthentic, mass-produced content. The new rules take effect on July 15.
Source: NDTV
3 hours ago
Meta’s new cloud processing feature raises privacy concerns for Facebook users
Facebook users are being urged to exercise caution before enabling a new feature that allows Meta, Facebook's parent company, to access and scan photos stored on their phones — including those never shared on social media platforms.
The development follows growing concerns over Meta’s use of user data, especially after reports confirmed that the company has been training its artificial intelligence (AI) models using publicly shared photos from Facebook and Instagram.
However, recent revelations indicate that Meta now seeks access to private photos stored on users’ devices, according to a report by asianetnews.
A TechCrunch report, cited by India Today and The Verge, explains that some Facebook users recently received pop-up notifications while attempting to upload a story. The notification offered the option to activate a new feature called Cloud Processing, which enables Meta to automatically upload photos from a user’s camera roll to the company’s cloud.
The feature promises to offer users AI-powered creative tools such as photo collages, event recaps, AI-generated filters, and theme-based suggestions for occasions like birthdays or graduations.
While the feature may appear useful and harmless at first glance, experts warn of significant privacy risks. Once activated, users effectively give Meta permission to scan, analyze, and process personal photos stored on their devices, including those never posted online. Meta’s AI system will reportedly examine faces, objects, locations, dates, and even the metadata embedded in those images.
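To make the privacy stakes concrete: “metadata embedded in those images” typically means EXIF tags, which record the capture date and time, the camera or phone model, and often GPS coordinates. The following is a minimal, illustrative Python sketch (not Meta’s code; the filename is hypothetical) showing how such tags can be read from any photo file using the Pillow library.

```python
# Illustrative only: reads the EXIF metadata embedded in a photo file.
# This is NOT Meta's code; "holiday_photo.jpg" is a hypothetical file.
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path: str) -> None:
    """Print the EXIF tags stored inside an image file."""
    image = Image.open(path)
    exif = image.getexif()                    # mapping of numeric tag IDs to values
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, str(tag_id))  # map numeric ID to a readable name
        print(f"{name}: {value}")

print_exif("holiday_photo.jpg")
```

Running this on a typical smartphone photo reveals how much context a single file can carry before it is ever shared anywhere.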
Meta has defended the feature, describing it as an entirely optional service aimed at enhancing user experience. The company says users can turn the feature on or off at any time. “It is an opt-in feature that you can turn on or off at will,” Meta said.
Despite these assurances, privacy advocates remain concerned, especially considering Meta's history of handling user data. The company recently admitted that it has been using all photos shared publicly on Facebook and Instagram since 2007 to train its generative AI models.
However, Meta has not clearly defined what qualifies as ‘public’ content or what age restrictions apply to its data use policies, raising further questions.
To opt out of Cloud Processing, users can disable the feature through Facebook’s settings. Meta says that if the feature is turned off, any unshared photos uploaded to the cloud will be deleted within 30 days.
As tech companies continue to experiment with the limits of user data collection in the AI era, experts warn that features like Cloud Processing — though presented as tools for user convenience — may quietly expand access to personal data.
Previously, users had to consciously decide to share photos publicly. With Cloud Processing enabled, however, those same photos can be silently uploaded to Meta’s servers, allowing Meta AI to access them.
In this context, experts advise users to carefully review the terms of such features and make informed decisions to protect their privacy.
12 days ago
NetChoice sues Arkansas over social media laws
Tech industry trade group NetChoice filed a lawsuit against the state of Arkansas on Friday, challenging two newly enacted laws that impose restrictions on social media platforms and open the door for parents to sue over harmful content linked to youth suicides.
The lawsuit, filed in federal court in Fayetteville, comes months after a federal judge struck down a previous Arkansas law requiring parental consent for minors to open social media accounts. The new laws were signed earlier this year by Republican Governor Sarah Huckabee Sanders.
“Despite the overwhelming consensus that laws like the Social Media Safety Act are unconstitutional, Arkansas elected to respond to this Court’s decision not by repealing the provisions that it held unconstitutional but by instead doubling down on its overreach,” NetChoice stated in the lawsuit.
Several U.S. states have pursued similar laws, citing concerns over the mental health effects of social media on children. NetChoice — whose members include TikTok, Meta (Facebook’s parent company), and X (formerly Twitter) — had successfully challenged Arkansas’ 2023 age-verification requirement for social media users, which a federal judge struck down in March.
Similar laws have also been halted by courts in Florida and Georgia.
A spokesperson for Arkansas Attorney General Tim Griffin said the office was reviewing the lawsuit and “looked forward to defending the law.”
One of the laws being challenged prohibits social media companies from using designs, algorithms, or features that they “know or should have known through the exercise of reasonable care” could lead users to die by suicide, purchase controlled substances, develop an eating disorder, or become addicted to the platform.
NetChoice argues that this provision is unconstitutionally vague and fails to provide clear guidance on what content would violate the restrictions. The lawsuit also points out that the law would affect both minors and adults.
It questions whether certain songs referencing drugs — like Afroman’s “Because I Got High” — would fall under the new restrictions.
The same law allows parents to sue social media companies if their children die by suicide or attempt to do so after exposure to content promoting self-harm or suicide. Companies found in violation could face civil penalties of up to $10,000 per incident.
NetChoice is also contesting a second law that broadens the scope of Arkansas’ restrictions on social media platforms. This measure requires platforms to prevent minors from receiving notifications between 10 p.m. and 6 a.m. It also prohibits companies from designing their platforms to “evoke any addiction or compulsive behavior.”
NetChoice contends the law lacks clarity on how platforms can comply and that its language is so broad it is unclear what types of content or features would constitute a violation.
“What is ‘addictive’ to some minors may not be addictive to others. Does allowing teens to share photos with each other evoke addiction?” the lawsuit stated.
13 days ago
Rise in harmful content on Facebook following Meta's moderation rollback
Meta's latest Integrity Report shows worrying spike in violent posts and harassment after policy shift aimed at easing restrictions on political expression.
Facebook has seen a notable rise in violent content and online harassment following Meta’s decision to ease enforcement of its content moderation policies, according to the company’s latest Integrity Report.
The report, the first since Meta overhauled its moderation strategy in January 2025, reveals that the rollback of stricter content rules has coincided with a drop in content removals and enforcement actions — and a spike in harmful material on its platforms, including Instagram and Threads.
Meta’s shift, spearheaded by CEO Mark Zuckerberg, was aimed at reducing moderation errors and giving more space for political discourse. However, the company now faces growing concern that the relaxed rules may have compromised user safety and platform integrity.
Violent Content and Harassment on the Rise
The report shows that violent and graphic content on Facebook increased from 0.06–0.07 per cent of content views in late 2024 to 0.09 per cent in the first quarter of 2025. While the percentages appear small (0.09 per cent amounts to roughly nine of every 10,000 views), the scale is significant for a platform used by billions.
Likewise, bullying and harassment rates rose in the same period. Meta attributed this to a March spike in violating content, noting a slight rise from 0.06–0.07 per cent to 0.07–0.08 per cent. These increases mark a reversal of a downward trend in harmful content seen in previous years.
Content Removals and Enforcement Plummet
The rise in harmful posts comes as Meta dramatically reduces enforcement activity. Only 3.4 million pieces of content were actioned under its hate speech policy in Q1 2025 — the lowest since 2018. Spam removals also fell sharply, from 730 million at the end of 2024 to 366 million in early 2025. Additionally, the number of fake accounts removed dropped from 1.4 billion to 1 billion.
Meta’s new enforcement strategy focuses primarily on the most severe violations, such as child exploitation and terrorism, while areas previously subject to stricter moderation — including immigration, gender identity, and race — are now framed as protected political expression.
The definition of hate speech has also been narrowed. Under the revised rules, only direct attacks and dehumanising language are flagged. Content previously flagged for expressing contempt or exclusion is now permitted.
Shift in Fact-Checking Strategy
In another major change, Meta has scrapped its third-party fact-checking partnerships in the United States, replacing them with a crowd-sourced system known as Community Notes. The system, now active across Facebook, Instagram, Threads, and even Reels, relies on users to flag and annotate questionable content.
While Meta has yet to release usage data for the new system, critics warn that such an approach could be vulnerable to manipulation and bias in the absence of independent editorial oversight.
Fewer Errors, Says Meta
Despite the concerns, Meta is presenting the new moderation approach as a success in terms of reducing errors. The company claims moderation mistakes in the United States dropped by 50 per cent between the final quarter of 2024 and Q1 2025. However, it has not disclosed how this figure was calculated. Meta says future reports will include more transparency on error metrics.
“We are working to strike the right balance between overreach and under-enforcement,” the report states.
Teen Protections Remain in Place
One area where Meta has not scaled back enforcement is in content directed at teenagers. The company has maintained strict protections against bullying and harmful content for younger users and is introducing dedicated Teen Accounts across its platforms to improve content filtering.
Meta also highlighted growing use of artificial intelligence, including large language models, in its moderation systems. These tools are now exceeding human performance in some cases and can automatically remove posts from review queues if no violation is detected.
As Meta pushes ahead with its looser content policies, experts and users alike will be watching closely to see whether the company can truly balance free expression with safety — or whether its platforms risk becoming breeding grounds for harmful content.
Source: With inputs from agencies
1 month ago
RobinRafan named Best Content Creator of 2025 at Kidlon-Powered 4th BIFA Awards
Content creator RobinRafan was honored with the Best Content Creator of 2025 award at the Kidlon-Powered 4th BIFA Awards, held last night at the BCFCC Hall of Fame.
The award was given by Asif Ahmed, Acting General Manager of Pan Pacific Sonargaon Hotel, alongside veteran actor Azizul Hakim, in recognition of RobinRafan’s creative contributions across digital platforms.
RobinRafan, also known as Obidur Rahman, creates content across various niches including technology, AI, and VFX, and has also been praised for raising social awareness through his work. He remains active on platforms such as Facebook, YouTube, TikTok, and Instagram, where his diverse content has garnered a large following and widespread engagement.
The event saw the presence of numerous well-known figures from the entertainment industry. Among the attendees were Rojina, Porimoni, Tanjin Tisha, Safa Kabir, Afran Nisho, Siyam, Mamnun Hasan Emon, Shahiduzzaman Selim, and Tariq Anam Khan, making it a night full of star power.
Outside the venue, a large crowd gathered to witness the arrival of celebrities on the red carpet.
The evening also featured a fashion show by Nirob and Apu Biswas, as well as dance performances by Prarthona Fardin Dighi and Barisha Haque.
Several other well-known personalities were recognized during the ceremony, including Afran Nisho, Siyam Ahmed, Mamnun Hasan Emon, Singer Imran, Kona, Tanjin Tisha, Mehazabien Chowdhury, and Chanchal Chowdhury.
Speaking at the event, Kidlon's Managing Director, Antu Kareem, remarked that the organization values the efforts of individuals making significant contributions in their respective fields and aims to continue organizing such events to encourage and highlight impactful work.
The 4th BIFA Awards marked a gathering of talent and achievement, with RobinRafan’s recognition highlighting the evolving landscape of content creation in Bangladesh.
1 month ago
TikTok fined $600 million for China data transfers that broke EU privacy rules
TikTok was fined €530 million ($600 million) by a European Union privacy regulator on Friday, following a four-year probe that concluded the platform’s transfer of user data to China posed potential spying risks and violated the EU's strict data protection laws.
Ireland’s Data Protection Commission, which oversees TikTok’s compliance in the EU due to the company’s European headquarters being located in Dublin, also criticized the platform for failing to clearly inform users about where their data was being sent. The watchdog has given TikTok six months to bring its practices in line with EU standards.
“TikTok did not sufficiently ensure or prove that the personal information of EU users — accessed remotely by employees in China — received a level of protection comparable to that required within the EU,” said Deputy Commissioner Graham Doyle in a statement.
TikTok said it disagreed with the decision and plans to appeal.
The company said in a blog post that the decision focuses on a “select period” ending in May 2023, before it embarked on a data localization project called Project Clover that involved building three data centers in Europe.
“The facts are that Project Clover has some of the most stringent data protections anywhere in the industry, including unprecedented independent oversight by NCC Group, a leading European cybersecurity firm,” said Christine Grahn, TikTok’s European head of public policy and government relations. “The decision fails to fully consider these considerable data security measures.”
TikTok, whose parent company ByteDance is based in China, has been under scrutiny in Europe over how it handles personal information of its users amid concerns from Western officials that it poses a security risk over user data sent to China. In 2023, the Irish watchdog also fined the company hundreds of millions of euros in a separate child privacy investigation.
The Irish watchdog said its investigation found that TikTok failed to address “potential access by Chinese authorities” to European users’ personal data under Chinese laws on anti-terrorism, counterespionage, cybersecurity and national intelligence that were identified as “materially diverging” from EU standards.
Grahn said TikTok “has never received a request for European user data from the Chinese authorities, and has never provided European user data to them.”
Under the EU rules, known as the General Data Protection Regulation, European user data can only be transferred outside of the bloc if there are safeguards in place to ensure the same level of protection.
Grahn said TikTok strongly disagreed with the Irish regulator’s argument that it didn’t carry out “necessary assessments” for data transfers, saying it sought advice from law firms and experts. She said TikTok was being “singled out” even though it uses the “same legal mechanisms” that thousands of other companies in Europe do and its approach is “in line” with EU rules.
The investigation, which opened in September 2021, also found that TikTok’s privacy policy at the time did not name third countries, including China, where user data was transferred. The watchdog said the policy, which has since been updated, failed to explain that data processing involved “remote access to personal data stored in Singapore and the United States by personnel based in China.”
TikTok faces further scrutiny from the Irish regulator, which said the company had provided inaccurate information during the inquiry by claiming that it did not store European user data on Chinese servers. Only in April did TikTok inform the regulator that it had discovered in February that some data had in fact been stored on Chinese servers.
Doyle said that the watchdog is taking the recent developments “very seriously” and “considering what further regulatory action may be warranted.”
2 months ago
Google faces off with US government in attempt to break up company in search monopoly case
Google is confronting an existential threat as the U.S. government tries to break up the company as punishment for turning its revolutionary search engine into an illegal monopoly.
The drama began to unfold Monday in a Washington courtroom as three weeks of hearings kicked off to determine how the company should be penalized for operating a monopoly in search. In its opening arguments, federal antitrust enforcers also urged the court to impose forward-looking remedies to prevent Google from using artificial intelligence to further its dominance.
“This is a moment in time, we’re at an inflection point, will we abandon the search market and surrender them to control of the monopolists or will we let competition prevail and give choice to future generations,” said Justice Department attorney David Dahlquist.
The proceedings, known in legal parlance as a “remedy hearing,” are set to feature a parade of witnesses that includes Google CEO Sundar Pichai.
The U.S. Department of Justice is asking a federal judge to order a radical shake-up that would ban Google from striking the multibillion-dollar deals with Apple and other tech companies that shield its search engine from competition, require it to share its repository of valuable user data with rivals, and force a sale of its popular Chrome browser.
Google’s attorney, John Schmidtlein, said in his opening statement that the court should take a much lighter touch. He said the government’s heavy-handed proposed remedies wouldn’t boost competition but instead unfairly reward lesser rivals with inferior technology.
“Google won its place in the market fair and square,” Schmidtlein said.
The moment of reckoning comes four-and-a-half years after the Justice Department filed a landmark lawsuit alleging Google’s search engine had been abusing its power as the internet's main gateway to stifle competition and innovation for more than a decade.
After the case finally went to trial in 2023, a federal judge last year ruled Google had been making anti-competitive deals to lock in its search engine as the go-to place for digital information on the iPhone, personal computers and other widely used devices, including those running on its own Android software.
That landmark ruling by U.S. District Judge Amit Mehta sets up a high-stakes drama that will determine the penalties for Google’s misconduct in a search market that it has defined since Larry Page and Sergey Brin founded the company in a Silicon Valley garage in 1998.
Since that austere start, Google has expanded far beyond search to become a powerhouse in email, digital mapping, online video, web browsing, smartphone software and data centers.
Seizing upon its victory in the search case, the Justice Department is now setting out to prove that radical steps must be taken to rein in Google and its corporate parent, Alphabet Inc.
“Google’s illegal conduct has created an economic goliath, one that wreaks havoc over the marketplace to ensure that — no matter what occurs — Google always wins,” the Justice Department argued in documents outlining its proposed penalties. “The American people thus are forced to accept the unbridled demands and shifting, ideological preferences of an economic leviathan in return for a search engine the public may enjoy.”
Although the proposed penalties were originally made under President Joe Biden's term, they are still being embraced by the Justice Department under President Donald Trump, whose first administration filed the case against Google. Since the change in administrations, the Justice Department has also attempted to cast Google's immense power as a threat to freedom, too.
In his opening statement, Dahlquist noted that top officials from the Justice Department were in the room to watch proceedings. He said their presence indicated that the case had the full support of federal antitrust regulators, both past and present.
“The fact that this case was filed in 2020, tried in 2023, under two different administrations, and joined by 49 states demonstrates the non-partisan nature of this case and our proposed remedies,” Dahlquist said.
Dahlquist also said that Mehta would be hearing a lot about AI — “perhaps more than you want, your honor” — and that top executives from AI companies, such as the maker of ChatGPT, would be called to testify. He said the court's remedies should include provisions to make sure that Google's AI product, Gemini, isn't used to strengthen its existing search monopoly.
“We believe that Google can and will attempt to circumvent the court's remedies if it is not included,” Dahlquist said. “Gen AI is Google's next evolution to keep their vicious cycle spinning.”
Schmidtlein, Google's attorney, said rival AI companies had seen enormous growth in recent years and were doing “just fine.”
Google is also sounding alarms, warning that the proposed requirement to share online search data with rivals and the proposed sale of Chrome would pose privacy and security risks. “The breadth and depth of the proposed remedies risks doing significant damage to a complex ecosystem. Some of the proposed remedies would imperil browser developers and jeopardize the digital security of millions of consumers,” Google lawyers said in a filing ahead of the hearings.
The showdown over Google's fate marks the climax of the biggest antitrust case in the U.S. since the Justice Department sued Microsoft in the late 1990s for leveraging its Windows software for personal computers to crush potential rivals.
The Microsoft battle culminated in a federal judge declaring the company an illegal monopoly and ordering a partial breakup — a remedy that was eventually overturned by an appeals court.
Google intends to file an appeal of Mehta's ruling from last year that branded its search engine as an illegal monopoly but can't do so until the remedy hearings are completed. After closing arguments are presented in late May, Mehta intends to make his decision on the remedies before Labor Day.
The search case marked the first in a succession of antitrust cases brought against tech giants, including Facebook and Instagram parent Meta Platforms, which is currently fighting allegations of running an illegal monopoly in social media in another Washington, D.C. trial. Antitrust cases have also been brought against Apple and Amazon.
The Justice Department also targeted Google's digital advertising network in a separate antitrust case that resulted last week in another federal judge's decision that found the company was abusing its power in that market, too. That ruling means Google will be heading into another remedy hearing that could once again raise the specter of a breakup later this year or early next year.
2 months ago
Meta CEO Zuckerberg considered spinning off Instagram in 2018 over antitrust worries, email says
Meta CEO Mark Zuckerberg once considered separating Instagram from its parent company due to worries about antitrust litigation, according to an email shown Tuesday on the second day of an antitrust trial alleging Meta illegally monopolized the social media market.
In the 2018 email, Zuckerberg wrote that he was beginning to wonder if “spinning Instagram out” would be the only way to accomplish important goals, as big-tech companies grow. He also noted “there is a non-trivial chance” Meta could be forced to spin out Instagram and perhaps WhatsApp in five to 10 years anyway.
He wrote that while most companies resist breakups, “the corporate history is that most companies actually perform better after they've been split up.”
Asked Tuesday by attorney Daniel Matheson, who is leading the antitrust case for the Federal Trade Commission, which instances in corporate history he had in mind, Zuckerberg responded: “I'm not sure what I had in mind then.”
Zuckerberg, who was the first witness, testified for more than seven hours over two days in the trial that could force Meta to break off Instagram and WhatsApp, startups the tech giant bought more than a decade ago that have since grown into social media powerhouses.
While questioning Zuckerberg on Tuesday morning, Matheson noted that he had referred to Instagram as a “rapidly growing, threatening, network.” The attorney also pointed to Zuckerberg's own description of trying to neutralize a competitor by buying the company.
But Zuckerberg said while Matheson was able to show documents in court that indicated his concern about Instagram's growth, he also had many conversations about how excited his company was to acquire Instagram to make a better product.
Zuckerberg also said Facebook was in the process of building a camera app for sharing on mobile phones, and he thought Instagram was better at that, “so I wanted to buy them.”
Zuckerberg also pushed back against Matheson's contention that the reason for buying the company was to neutralize a threat.
“I think that that mischaracterizes what the email was,” Zuckerberg said.
In his questioning of Zuckerberg, Matheson repeatedly brought up emails — many of them more than a decade old — written by Zuckerberg and his associates before and after the acquisition of Instagram.
While acknowledging the documents, Zuckerberg has often sought to downplay the contents, saying he wrote them in the early stages of considering the acquisition and that what he wrote at the time didn't capture the full scope of his interest in the company.
Matheson also brought up a February 2012 message in which Zuckerberg wrote to the former chief financial officer of Facebook that Instagram and Path, a social networking app, already had created meaningful networks that could be “very disruptive to us.”
Zuckerberg testified that the message was written in the context of a broad discussion about whether they should buy companies to accelerate their own developments.
Zuckerberg also testified that buying the company, taking it off the market and building their own version of it was “a reasonable thing to do.”
Later Tuesday, Mark Hansen, an attorney for Meta, began his questioning of Zuckerberg. Hansen, in his opening statements Monday, emphasized that Meta's services are free and that the company, far from holding a monopoly, actually has a lot of competition. He made a point of bringing up those issues in just over an hour of questioning Zuckerberg, with more expected to come Wednesday.
“It's very competitive,” Zuckerberg said, noting that charging for using services like Facebook would likely drive users away, since similar services are widely available elsewhere.
The trial is one of the first big tests of the FTC’s ability under President Donald Trump to challenge Big Tech. The lawsuit was filed against Meta — then called Facebook — in 2020, during Trump’s first term. It claims the company bought Instagram and WhatsApp to squash competition and establish an illegal monopoly in the social media market.
Facebook bought Instagram — which was a photo-sharing app with no ads — for $1 billion in 2012.
Instagram was the first company Facebook bought and kept running as a separate app. Until then, Facebook was known for smaller “acqui-hires” — a popular Silicon Valley deal in which a company purchases a startup as a way to hire its talented workers, then shuts the acquired company down. Two years later, it did it again with the messaging app WhatsApp, which it purchased for $22 billion.
WhatsApp and Instagram helped Facebook move its business from desktop computers to mobile devices, and to remain popular with younger generations as rivals like Snapchat (which it also tried, but failed, to buy) and TikTok emerged.
However, the FTC has a narrow definition of Meta’s competitive market, excluding companies like TikTok, YouTube and Apple’s messaging service from being considered rivals to Instagram and WhatsApp.
U.S. District Judge James Boasberg is presiding over the case. Late last year, he denied Meta’s request for a summary judgment and ruled that the case must go to trial.
2 months ago
7 Warning Signs Social Media Is Affecting Your Child’s Mental Health
In today’s hyper-connected world, children are growing up with screens as constant companions—scrolling, sharing, and seeking approval online. While social media offers opportunities for connection and creativity, its darker effects often go unnoticed. Subtle shifts in behaviour, mood, and daily habits may indicate underlying emotional distress. Recognising these early warning signs is crucial to safeguarding kids’ mental health and overall well-being. Let’s look closely at the red flags that social media-addicted children may show, signs of more than just screen fatigue.
7 Red Flags That Signal Social Media Affects Your Child’s Mental Wellbeing
Irritability, Anger, Anxiety, and Depression
Emotional turbulence is often one of the first signs that social networks are impacting a child’s mental well-being. A child who once handled challenges with calm may suddenly snap over minor inconveniences—like being asked to pause their screen time. This shift is more than a passing phase.
Excessive exposure to digital platforms can condition a child’s brain to expect instant gratification. Consequently, it becomes difficult to tolerate delays or engage in slower-paced activities like reading or studying. The flood of fast, dopamine-triggering content rewires emotional responses, often replacing patience with frustration. As a result, parents might find their child increasingly restless, easily angered, and emotionally unbalanced even away from the screen.
Losing Track of Time
When children spend long hours online, it’s easy for them to lose a sense of time. What often begins as a quick scroll can spiral into hours of passive consumption, especially on apps designed to encourage endless engagement. This disconnection from time awareness can quietly lead to neglect of daily responsibilities such as homework, family interactions, or personal hygiene.
The 2025 report from Common Sense Media reveals that children under 8 now spend an average of 2 hours and 27 minutes each day engaging with screen-based media. TikTok dominates their screen time with nearly two hours a day, making it the top platform among this age group. These numbers point to a growing trend where time management skills erode as children become immersed in the virtual world.
Social Withdrawal
As children spend more time scrolling through digital feeds, their connection with real-world interactions often begins to fade.
Social psychologist Jonathan Haidt, in his book The Anxious Generation (2024), likens social media to a firehose of addictive content. It displaces physical activity and in-person play—fundamental elements of healthy childhood development.
Children who use online media for three or more hours a day often avoid eye contact, struggle to express emotions clearly, and may speak in incomplete sentences during face-to-face interactions.
For instance, a child who once eagerly engaged in family dinners might now retreat to their room, avoiding conversation entirely. This pattern of withdrawal isn’t shyness—it’s discomfort, shaped by a digital world that rarely demands verbal or emotional expression.
Misguided Self-esteem
Virtual communities often act as distorted mirrors, shaping how children perceive their worth. Constantly exposed to highlight reels of peers’ lives, many begin to question their own value.
According to ElectroIQ's Social Media Mental Health Statistics, 52% of users report feeling worse about their lives after seeing friends’ posts. 43% of teenagers admit feeling pressure to post content, driven by the hope of gaining likes or comments.
This chase for validation can have serious consequences. Children may develop body image issues or body dissatisfaction, comparing themselves to edited or filtered content. To gain approval online, they might resort to risky behaviour. For example, a teen might post provocative or reckless videos for attention and digital praise.
Losing Attention in Offline Tasks
Children nowadays increasingly struggle to stay focused on tasks that require sustained concentration, like reading, studying, or completing chores. A SambaRecovery report put children’s average attention span at just 29.61 seconds, a figure that fell by a significant 27.41% over the course of a continuous performance test.
This trend mirrors parental concerns: 79% of parents, as cited by Common Sense Media in 2025, fear that heavy screen exposure is eroding their child's ability to concentrate.
This erosion is often visible in daily life. Constant notifications, videos, and scrolling content condition young minds to crave quick bursts of stimulation. It makes slow, offline tasks feel dull and unrewarding. Over time, this affects not just academics but also a child’s overall cognitive stamina and productivity.
Fear Of Missing Out (FOMO)
This is a powerful psychological driver that affects emotional health and can be especially damaging. This feeling stems from the perception that others are enjoying experiences, events, or interactions without them. It's amplified through the constant visibility of others’ lives online.
For example, a kid might see classmates hanging out without him/her, sparking feelings of exclusion, sadness, or even jealousy. These emotions, although silently endured, can create deep emotional turbulence. FOMO intensifies anxiety and self-doubt, fuelling compulsive social network checking as children try to stay “in the loop” at all times.
Increased Secrecy and Refusal to Go Outside
When children begin to maintain excessive secrecy, it’s often a red flag that something deeper is affecting their well-being. If your child has previously been open but suddenly becomes reluctant to share details about their day or their online activities, it could signal emotional distress. Secrecy often indicates that they are hiding something troubling, like exposure to cyberbullying or other online dangers.
According to social media mental health statistics, 87% of teens report being cyberbullied. Notably, 36.4% of girls report being affected by online harassment, compared to 31.4% of boys.
This constant exposure to negativity can cause children to avoid going outside, preferring the perceived safety of digital spaces. Over time, this behaviour can lead to a loss of trust and emotional isolation, as children avoid engaging in conversations.
Wrapping Up
These 7 warning signs reflect social media's negative impact on children's mental and emotional health. Excessive screen time can cause them to lose track of time and decrease their attention span, neglecting important tasks and responsibilities. Over time, this often results in social withdrawal. The constant comparison to others online fosters misguided self-esteem and worsens their mental well-being. Furthermore, children may struggle with FOMO, which heightens their feelings of inadequacy. As they struggle with these emotions, many develop increased secrecy, distancing themselves from the real world. All of these factors contribute to heightened emotional distress, often manifesting as irritability, anger, anxiety, and depression.
2 months ago
Can technology help more sexual assault survivors in South Sudan?
After being gang-raped while collecting firewood, a 28-year-old woman in South Sudan struggled to find medical assistance. Some clinics were closed, others turned her away, and she lacked the money for hospital care.
Five months later, she lay on a mat in a displacement camp in Juba, rubbing her swollen belly. “I felt like no one listened … and now I’m pregnant,” she said. The Associated Press does not identify survivors of sexual assault.
Sexual violence remains a persistent threat for women in South Sudan. Now, an aid group is using technology to locate and support survivors faster. However, low internet access, high illiteracy rates, and concerns over data privacy pose challenges in a country still grappling with instability.
Using Chatbots to Bridge the Gap
Five months ago, IsraAID, an Israeli humanitarian organization, introduced a chatbot on WhatsApp in South Sudan. The system enables staff to document survivors’ accounts anonymously, triggering immediate alerts to social workers who can provide aid within hours.
Rodah Nyaduel, a psychologist with IsraAID, said the technology enhances case management, reducing the risk of misplaced paperwork. “As soon as an incident is recorded, I get a notification with the case details,” she said.
While experts agree that technology can minimize human error, concerns remain about how such data is handled.
“Who has access to this information? Is it shared with law enforcement? Could it cross borders?” asked Gerardo Rodriguez Phillip, a UK-based AI and technology consultant.
IsraAID insists its system is encrypted, anonymized, and automatically deletes records from staff devices. During the chatbot’s first three months in late 2024, it processed reports of 135 cases.
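For illustration, here is a minimal Python sketch of the kind of anonymized intake-and-alert flow the report describes: a case record carries a random ID instead of identifying details, and recording it triggers an immediate notification to a caseworker. All names, fields, and the print-based “alert” are assumptions for the sketch; this is not IsraAID’s actual system.

```python
# Hypothetical sketch of an anonymized case-intake flow -- NOT IsraAID's
# real system. Field names and the notification transport are assumptions.
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CaseReport:
    """A survivor's account, stored without identifying details."""
    case_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())
    area: str = ""                              # coarse area only, never an address
    needs: list = field(default_factory=list)   # e.g. medical care, counselling

def notify_social_worker(report: CaseReport) -> None:
    # A real deployment would push an encrypted alert to a caseworker's
    # device; printing stands in for that transport layer here.
    print(f"New case {report.case_id} ({report.area}): needs {report.needs}")

report = CaseReport(area="Juba displacement camp",
                    needs=["medical care", "counselling"])
notify_social_worker(report)
```

The design point the sketch captures is that the alert carries only a random case ID and the declared needs, so a caseworker can respond within hours without any identifying information leaving the intake device.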
Barriers to Accessing Help
For the 28-year-old survivor, timely intervention could have changed everything. She knew she had just a few days to take medication to prevent pregnancy and disease, but when she approached an aid group, her details were hastily written on paper, and she was told to return later. When she did, staff were too busy to help. After 72 hours, she gave up. Weeks later, she realized she was pregnant.
IsraAID eventually located her through door-to-door outreach. Initially hesitant about having her information recorded on a phone, she agreed after learning the devices were not personal and that she could hold the organization accountable if issues arose.
She is among thousands still living in displacement camps in Juba, years after a 2018 peace deal ended the country’s civil war. Many fear leaving or have no homes to return to.
Women who venture out for necessities like firewood continue to face the risk of assault. Several women in the camps told the AP they had been raped but lacked access to services, as humanitarian aid has declined and government investment in health remains minimal. Many cannot afford transportation to hospitals.
The Impact of Funding Cuts
The situation has worsened following U.S. President Donald Trump’s recent executive order pausing USAID funding for a 90-day review period. The freeze has forced aid organizations to shut down critical services, including psychological support for sexual violence survivors, affecting tens of thousands.
Can More Tech Solutions Work?
Most humanitarian groups tackling gender-based violence in South Sudan have yet to widely adopt technology. Some organizations believe an ideal app would allow survivors to seek help remotely.
However, stigma surrounding sexual violence makes it difficult for survivors—especially young girls—to seek assistance. Many need permission to leave home, said Mercy Lwambi, gender-based violence lead at the International Rescue Committee.
“They want to talk to someone quickly, without waiting for a face-to-face meeting,” she said.
Yet, South Sudan has one of the world’s lowest mobile and internet penetration rates—less than 25%, according to GSMA, a global network of mobile operators. Even those with phones often lack internet access, and many people are illiterate.
“You have to ask: Will this work in a low-tech environment? Are people literate? Do they have the right devices? Will they trust it?” said Kirsten Pontalti, a senior associate at the Proteknon Foundation for Innovation and Learning.
Pontalti, who has tested chatbots for sexual health education and child protection, said such tools should include audio features for those with low literacy and remain as simple as possible.
A Desire to Be Heard
Some survivors just want acknowledgment—whether in person or through technology.
A 45-year-old father of 11 waited years before seeking help after being sexually assaulted by his wife, who forced him into sex despite his refusal and concerns about providing for more children.
It took multiple visits by aid workers to his displacement camp before he finally opened up.
“Organizations need to engage more with the community,” he said. “If they hadn’t come, I wouldn’t have spoken out.”
Source: With input from agency
3 months ago