Back in 2015, the internet was still mostly unencrypted. On Android, only about 30% of page loads used HTTPS. Even by late summer, that number had barely passed 40%. Desktop stats looked the same. Most websites were loaded over plain HTTP, meaning anyone on the same Wi-Fi or network path could see what you were doing. It wasn’t until early 2017 that HTTPS finally crossed the 50% mark, and that milestone made headlines.
Fast-forward to 2025, and the shift is dramatic. Over 95% of page loads in Chrome on Windows now use HTTPS. That figure holds steady between 92% and 97% across mobile and desktop platforms. A big part of this success comes from Let’s Encrypt, the nonprofit that gives out free SSL certificates. As of 2025, it helps secure over 600 million websites, signing around 7 million new certificates daily.
Browser makers also played a significant role.
In just a decade, encrypted connections have gone from rare to routine. That’s a win for privacy. But it also gave us a false sense of safety, because the padlock in your browser doesn’t always mean the site behind it is safe.
Here’s the problem: as HTTPS adoption surged, so did a common and dangerous misunderstanding. Many people believe that the padlock icon in their browser means a website is safe, but that’s not what it means at all.
A long-term Google study showed just how widespread the confusion is. Out of 1,880 participants, only 11% knew what the padlock stands for. In other words, nearly 9 out of 10 people wrongly assumed the padlock means the site itself is verified and trustworthy. It doesn’t.
Cybercriminals quickly learned to take advantage of this trust gap. By early 2021, 83% of phishing sites were already using valid HTTPS certificates, and this number has stayed high ever since. Why? Because it’s easy.
Remember this: Free domain-validation (DV) certificates, like those from Let’s Encrypt, confirm that someone controls a domain, not who they are or what they’re doing.
That’s how a fake site like paypa1.com can show the same secure padlock as the real PayPal. The encryption keeps your connection private, but it can’t tell if the site is a scam. So while the browser padlock protects the data on the way in, it won’t stop you from handing it to the wrong people.
Let’s clear this up: an SSL or TLS certificate doesn’t mean a website is safe. It only confirms two things: that your connection to the site is encrypted, and that the certificate was issued to whoever controls that domain.
That’s it. The certificate says nothing about who owns the site or whether it’s trustworthy. It’s like locking a package but not knowing who’s opening it on the other end.
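To make the point concrete, here is a minimal sketch of what a certificate’s subject actually reveals, using the nested-tuple layout that Python’s `ssl.SSLSocket.getpeercert()` returns. The two sample certificates below are illustrative data, not real certificates: a typical DV certificate names only the domain, while an organization-validated one also names a company.

```python
def subject_fields(cert: dict) -> dict:
    """Flatten the 'subject' RDN tuples into a plain dict of fields."""
    fields = {}
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            fields[key] = value
    return fields

# A typical domain-validation (DV) certificate: only the domain name.
dv_cert = {"subject": ((("commonName", "paypa1.com"),),)}

# An organization-validated certificate additionally names a company.
ov_cert = {
    "subject": (
        (("commonName", "www.paypal.com"),),
        (("organizationName", "PayPal, Inc."),),
    )
}

print(subject_fields(dv_cert))                         # {'commonName': 'paypa1.com'}
print("organizationName" in subject_fields(dv_cert))   # False
print("organizationName" in subject_fields(ov_cert))   # True
```

Notice that the DV certificate for the lookalike domain carries no identity information at all, yet it produces exactly the same padlock in the browser.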
Even the fancier certificates, like Extended Validation (EV), which used to show company names in green, are now treated like any other. Chrome dropped EV labels in 2019, followed by Firefox shortly after. Studies showed these indicators confused users more than they helped.
Most websites today use DV certificates because they’re easy and often free. Let’s Encrypt alone issues them to hundreds of millions of sites. There’s no background check. You get the lock icon if you can prove you control the domain.
Unfortunately, attackers can (and do) use the very same certificates. In early 2021, PhishLabs and RiskIQ found that 83% of phishing sites used valid HTTPS. That hasn’t changed much since. Nearly 5 million phishing sites were tracked in 2023, many likely using HTTPS just like legitimate sites.
So yes, HTTPS protects your data on the way to the website, but it won’t protect you from fake login pages, malware, or scams once you’re there. Encryption stops eavesdropping, but it doesn’t stop deception. That part still depends on smart browsing and good browser defenses.
Back in the day, HTTPS was a sign that a site was secure, used mainly by banks and e-commerce. Today, attackers embrace it. In 2024, Zscaler’s ThreatLabz found that nearly half of all malicious websites they analyzed used SSL certificates from Let’s Encrypt, a free and automated certificate authority. HTTPS has become a tool for attackers, not a barrier.
SSL certificates used to require manual setup and cost money. But since Let’s Encrypt became generally available in 2016, attackers can script the whole process, from registering a domain to serving a live, padlocked phishing page.
Attackers often use lookalike domains like secure-paypa1.com, replacing characters with similar ones. According to Tripwire’s 2023 Domain Impersonation Report, over a million fake domains were registered in just six months — many lying dormant until they’re used in phishing campaigns. The delay helps them avoid early detection by security tools.
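The character-substitution trick described above can be caught by folding commonly swapped characters back to their lookalike letters before comparing hostnames. The sketch below uses a tiny, illustrative confusables table; real detection tools rely on much larger mappings (e.g. the Unicode confusables list) and full domain parsing.

```python
# Small illustrative table: digits attackers commonly swap for letters.
CONFUSABLES = str.maketrans({"1": "l", "0": "o", "3": "e", "5": "s"})

def looks_like(candidate: str, brand: str) -> bool:
    """True if `candidate` imitates `brand` after confusable folding,
    either as a direct swap (paypa1.com) or by embedding the brand
    name as a label (secure-paypa1.com, paypal.com.evil.net)."""
    folded = candidate.lower().translate(CONFUSABLES)
    brand = brand.lower()
    if folded == brand and candidate.lower() != brand:
        return True  # pure homoglyph swap: paypa1.com -> paypal.com
    brand_label = brand.split(".")[0]                  # "paypal"
    labels = folded.replace("-", ".").split(".")
    return brand_label in labels and folded != brand

print(looks_like("paypa1.com", "paypal.com"))          # True
print(looks_like("secure-paypa1.com", "paypal.com"))   # True
print(looks_like("paypal.com", "paypal.com"))          # False
print(looks_like("example.com", "paypal.com"))         # False
```

A check like this flags the domain itself; nothing in the TLS handshake would, which is exactly why HTTPS on these sites raises no alarms.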
These domains are then activated, with HTTPS enabled, to host cloned login pages. Since users see a secure connection (padlock icon), they assume the site is trustworthy. This strategy makes phishing sites blend in. A phishing ring tracked by Palo Alto Networks spun up over 10,000 fake domains in just three months, all with HTTPS active from day one.
HTTPS allows these phishing sites to look secure even in browsers that normally flag suspicious pages. On mobile devices, where users are less likely to inspect URLs or security details, the presence of HTTPS eliminates most warnings and gives the site visual legitimacy.
For years, the padlock has been misunderstood by the public as a sign that a site is “safe.” Attackers use this perception against users. A 2025 Keepnet Labs study showed that new employees were 44% more likely to click phishing links in their first 90 days — especially when reassured by that familiar padlock. It’s not about technical security anymore; it’s psychological.
Browsers are moving away from the padlock icon because users keep misunderstanding it. Google made the first significant change in 2023. With Chrome version 117, the classic padlock was replaced by a more neutral “tune” icon. By early 2024, the new icon had rolled out widely, appearing on over two-thirds of desktop Chrome pages. Google’s explanation was direct: the padlock doesn’t signal trustworthiness, so leaving it there just spreads confusion.
Firefox has taken a similar, slower path.
Are users catching on? Not really. A Google survey of 1,880 people found that 89% held wrong ideas about the padlock. A UK study from 2024 showed that only 7% truly understood what it meant, and just 5% could explain it correctly without help. That’s why both Chrome and Firefox are redesigning how they signal security.
However, both browsers encourage users to click to see what’s happening under the hood. Chrome’s tune icon reveals permissions, cookies, and certificate details, and Firefox does the same in its Site Info panel. The idea is simple: clicking forces people to think, and when users think, they’re less likely to trust blindly.
By 2025, the message is clear: encryption is now the norm. You don’t need flashy icons to prove it. Instead, browsers are shifting toward subtle alerts that only show when something’s wrong, because no signal is better than a misleading one.
Today, most of the web is finally encrypted, and that’s a big step forward. But don’t mistake the padlock icon for a seal of approval. It was never meant to prove a site is honest, only that the connection is secure.
Attackers know this, which is why phishing sites and malware pages often use HTTPS. They dress up their traps in the same shiny encryption your bank uses. And now, browsers have started to move on, retiring the padlock that gave users a false sense of safety for years.
So here’s the takeaway: encryption is just the starting line.
To stay safe, check the actual domain name before entering credentials, treat the padlock as proof of encryption rather than legitimacy, and click the site-information icon to see who the certificate was really issued to.
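The “check the actual domain” habit can be sketched in a few lines: extract the hostname from a link and compare it against the site you expect. The allow-list below is a hypothetical stand-in for domains you actually trust; note that the scheme being `https` proves encryption, not identity.

```python
from urllib.parse import urlparse

# Hypothetical allow-list: the exact hostnames you expect to log in to.
TRUSTED = {"paypal.com", "www.paypal.com"}

def is_expected_site(url: str) -> bool:
    """True only when the URL's hostname is exactly on the allow-list."""
    host = urlparse(url).hostname or ""
    return host.lower() in TRUSTED

print(is_expected_site("https://www.paypal.com/signin"))       # True
print(is_expected_site("https://secure-paypa1.com/signin"))    # False
print(is_expected_site("https://paypal.com.evil.net/signin"))  # False
```

The exact-match comparison matters: suffix or substring checks would wave through `paypal.com.evil.net`, the same embedding trick phishing campaigns use.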
In 2025, the smartest security habit isn’t trusting the padlock – it’s trusting your instincts.
I hold a PhD in Computer Science and Electrical Engineering and currently serve as an associate professor at the Faculty of Electronic Engineering. My academic background and professional experience have provided me with expertise in Data Science and Intelligent Control, which I actively share with students through teaching and mentorship. As the Chief of the Laboratory for Intelligent Control, I lead research in modern industrial process automation and autonomous driving systems.
My primary research interests focus on the application of artificial intelligence in real-world environments, particularly the integration of AI with industrial processes and the use of reinforcement learning for autonomous systems. I also have practical experience working with large language models (LLMs), applying them in various research, educational, and content generation tasks. Additionally, I maintain a growing interest in AI security and governance, especially as these areas become increasingly critical for the safe and ethical deployment of intelligent systems.