01 Jan What is the Online Safety Act?
Government lays out plans to protect users online
Access to the internet has been a lifeline for many people throughout the coronavirus pandemic, offering a quick way to find information and stay in touch with loved ones.
However, while the internet is a valuable resource, it can also be misused. According to cybersecurity firm Kaspersky, 84% of parents are concerned about their child’s internet safety, while statistics from the Office for National Statistics reveal that 764,000 youngsters have experienced online bullying.
In a world where we are becoming increasingly dependent on all things digital, new government legislation is being developed to make the internet safer for children and vulnerable individuals.
What is the Online Harms Bill?
The Online Harms Bill will introduce new online safety regulations to help keep people safe when using the internet. This will mean:
- All in-scope firms must take steps to combat illegal conduct that endangers children’s safety, in addition to preventing minors from accessing inappropriate content and implementing robust anti-cyberbullying safeguards.
- Adults should be far less likely to come across unlawful content online. If they do, it should be simple to report the company in question, which would have to act swiftly to remove it.
How will it be enforced?
Ofcom will publish codes of practice explaining the systems and processes that businesses must use to be compliant.
Given the seriousness of these harms, the government has already published codes on terrorism and child sexual exploitation.
The largest and most popular social-media sites, referred to as category-one services, will be required to clearly explain in their terms and conditions how they will handle so-called legal harms.
If businesses fail to comply with the new regulations, Ofcom will be able to levy fines of up to £18 million or 10% of annual worldwide turnover, whichever is greater.
According to the government, it will also be able to block access to sites in the UK.
Which companies will this affect?
The laws will apply to companies that host user-generated content. This may include pictures, videos and commentary, or a website enabling UK users to chat with other individuals via messages, comments and forums. This means the new rules will cover popular social media firms such as Facebook, Twitter and Instagram.
Does it go far enough?
Some campaigners do not think so.
Belinda Parmar, a tech specialist, said: “There’s a huge ambition. The regulator will have teeth, but no code of conduct has been issued for non-legal harm, and the whole thing is quite ambiguous.”
Others, including the NSPCC, say the fines stop short of making the senior management of technology corporations legally accountable for harmful content.
Labour called the proposals “watered down and incomplete” and said the new rules would do “very little” to ensure children are safe online.
The government has reserved the powers for Ofcom to pursue criminal action “if tech companies fail to live up to their new responsibilities”. A review will take place two years after the new regime is operational.
Ian Russell, father of Molly, who killed herself in 2017 after viewing thousands of posts about suicide and self-harm on social media, said he hoped the new law would “focus the minds of the tech platforms” to change their corporate culture and to reduce online harms.
How can ATP Enable help?
Speak with one of our experienced team members to learn more about the latest deals from a range of secure mobile network providers.
Call 03333 58 39 38, or email email@example.com