WASHINGTON — The Biden administration’s new cyber strategy calling for minimum security standards across multiple economic sectors looks likely to face opposition from some lawmakers and businesses as U.S. officials work to implement the blueprint.
Top Republicans on the House Homeland Security Committee said in a statement that the administration should be seeking partnerships with the private sector rather than punishment. And at least one private cybersecurity expert warned of potential resistance from sectors that already work under federal regulatory requirements.
By including software makers among those that will have a role in cybersecurity, the administration could also be testing the willingness of that industry to weed out participants that don’t provide adequate security. And by speculating about the possibility of federal insurance for attacks, the strategy is raising questions about the potential for big changes in private sector behavior.
The strategy, released last week, said the current practice of allowing sectors including utilities, food and agriculture, health care and others to meet voluntary cybersecurity standards had resulted “in inadequate and inconsistent outcomes,” and it prescribed regulations to “level the playing field.”
Reps. Mark E. Green, R-Tenn., the chairman of the House Homeland Security Committee, and Andrew Garbarino, R-N.Y., the chairman of its Cybersecurity and Infrastructure Protection Subcommittee, responded with a statement urging the administration to streamline existing regulations and to favor partnerships rather than punishment in the implementation of the strategy.
“The key to building trust with our private sector partners is employing harmonization across government, rather than encouraging disparate and competing efforts,” Green and Garbarino said. “We must clarify federal cybersecurity roles and responsibilities, not create additional burdens, to minimize confusion and redundancies across the government.”
The administration is making the case that mandated security standards in key sectors are needed after high-profile cyberattacks in late 2020 and 2021 showed voluntary standards aren’t working. An attack on Colonial Pipeline in 2021 shut down supplies of gasoline on the East Coast, and several federal agencies were themselves victims when software supplier SolarWinds was hacked in late 2020.
After the Colonial attack, the administration imposed minimum security standards for operators of pipelines. And similar standards were later extended to airlines and railroads.
The Cybersecurity and Infrastructure Security Agency, or CISA, oversees cybersecurity in the 16 critical sectors that may face new standards. But many of the sectors, including financial services and health care, are overseen by other regulatory bodies, some of which address cybersecurity.
The financial services sector, for example, is one where several regulatory agencies already prescribe cybersecurity requirements, and additional regulation stemming from the new cybersecurity strategy could face resistance there, said Marcus Fowler, CEO of Darktrace Federal, part of U.K.-based Darktrace, a global cybersecurity company.
“I think you’re going to run into business interests and other areas that could erode the bipartisan-ness of cybersecurity when you start to touch on a couple of different sectors,” Fowler said. “I think the one that jumps out to me, which is a critical sector but also one that already has a lot of regulation, is financial services.”
White House officials developing and implementing the strategy acknowledged the need to streamline regulations for some sectors already meeting several cyber standards even as other sectors face few rules.
“We have to raise the bar in some places, we have to harmonize in other places to create a level playing field,” Kemba Walden, acting national cyber director, said last week at an event hosted by the Center for Strategic and International Studies.
Rep. Bennie Thompson, D-Miss., ranking member of the House Homeland Security Committee, said requirements are needed.
“As cyberattacks increase in frequency and sophistication, smart, well-harmonized, performance-based security requirements for critical infrastructure could help ensure the critical infrastructure we rely on every day is sufficiently resilient to keep operating in the wake of a compromise,” he said.
Sen. Gary Peters, chairman of the Senate Homeland Security and Governmental Affairs Committee, said in a statement that he would “closely examine this strategy, quickly consider the parts of it that will require Congressional action.”
Peters, D-Mich., authored legislation, enacted in the last Congress, that requires operators of critical infrastructure to report cyberattacks to federal agencies.
The administration’s cyber strategy also called for shifting liability for insecure software that enables cyberattacks to makers of such software.
“Poor software security greatly increases systemic risk across the digital ecosystem and leaves American citizens bearing the ultimate cost,” the strategy said. “We must begin to shift liability onto those entities that fail to take reasonable precautions to secure their software while recognizing that even the most advanced software security program cannot prevent all vulnerabilities.”
The strategy pointed to software developed by unvetted third parties that is embedded into commonly used programs, potentially allowing hackers to exploit flaws.
Well-established software companies that sell to commercial enterprises “do take security seriously and invest heavily in it,” said Henry Young, policy director at BSA-The Software Alliance, a trade group that represents companies including IBM, Microsoft, Salesforce and others.
Shifting liability to companies for making software with poor security features may help the industry overall by curbing “fly-by-night operators” who are not interested in long-term market presence, Young said.
The administration also is exploring a federal insurance backstop to aid victims of cyberattacks after “catastrophic cyber events,” the strategy said, adding that officials will consult with lawmakers, state regulators, and the insurance industry on how to design such a backstop.
History of online security, from CAPTCHA to multifactor authentication
As more people have been moving their office work to remote computers, trying to hold secure meetings over technologies like Zoom from home or coffee shops is increasingly common. While some criminal activities, like skimming your credit card at gas pumps, may be falling out of fashion as fewer people commute every day, others, such as classic hacking, can thrive as long as people are working remotely on their computers, opening new opportunities for hackers. In the past five years, the FBI has received more than 2.76 million complaints regarding various cybercrimes, including identity theft, extortion, and phishing, with losses exceeding $6.9 billion, according to the bureau's 2021 data.
With security top of mind, Beyond Identity collected information from think tanks, news reports, and industry professionals to understand landmark moments in internet security over the past 50 years. The internet began as a classified government program to connect different important military and government facilities. The first outside users were from universities, where very smart people have long been inventing new ways to poke holes in the internet as a form of preventive research.
From the first antivirus program in the 1970s to the zero-trust protocols of today, security has evolved over the years as developers strive to stay one step ahead of hackers.

1970s: Antivirus software
A computer virus is a piece of software the user typically downloads when they click on an infected email attachment or another file. The first virus was a 1970s program called Creeper, which was designed to crawl the early internet known as ARPANET, according to a report from Cyber Magazine. Like modern penetration testers, researchers wanted to see how they could hypothetically invade their own system. In response, email inventor Ray Tomlinson wrote a program he named Reaper, which chased and destroyed Creeper. That makes Reaper the first-ever antivirus program, creating a genre that endures today.
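Antivirus programs descended from Reaper long worked mainly by signature matching: scanning files for byte patterns known to belong to malware. A toy sketch of that idea in Python (the signature database here is invented for illustration; real products hold millions of entries plus behavioral heuristics):

```python
# Toy signature-based scanner: flags data containing any known byte pattern.
# The "signatures" below are made up for illustration only.
SIGNATURES = {
    b"CREEPER": "Creeper (demo)",
    b"\xde\xad\xbe\xef": "DemoVirus.A",
}

def scan(data: bytes) -> list[str]:
    """Return the names of all signatures found in the given bytes."""
    return [name for sig, name in SIGNATURES.items() if sig in data]

print(scan(b"I'M THE CREEPER: CATCH ME IF YOU CAN"))  # ['Creeper (demo)']
print(scan(b"clean file contents"))                   # []
```

The weakness of this approach, then and now, is that it only catches malware someone has already seen and cataloged, which is why modern tools layer heuristics on top.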
1970s: Encryption
Cryptography is the blanket term for the field of mathematics and security that involves setting codes and encoding information for safe transit. Encryption simply means applying a cryptographic algorithm to a piece of information. With computers, one of the first examples of network encryption came from IBM in the early 1970s. The first standard encryption algorithm, known as the Data Encryption Standard (DES), lasted for more than 20 years before advances in computing power finally broke it. Today, researchers race to keep their mathematics ahead of those trying to use that same computing power to break the algorithms.
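The core property of a symmetric cipher like DES is that the same shared key both scrambles and unscrambles the data. As a toy illustration of that symmetry (this is a simple repeating-key XOR, not DES, and not secure for real use):

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; applying the same function
    twice with the same key restores the original bytes."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(16)                            # shared secret key
ciphertext = xor_cipher(b"attack at dawn", key)
assert xor_cipher(ciphertext, key) == b"attack at dawn"  # round-trips
```

Real ciphers replace the trivial XOR step with many rounds of keyed substitution and permutation, but the encrypt/decrypt symmetry is the same.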
2000s: CAPTCHA
In the late 1990s, the internet was rapidly growing in popularity, with intrusive technology like cookies and viruses rapidly following. People realized they could use bots, or automated processes, to post spam comments on websites at a massive scale, for example.
Researchers at Carnegie Mellon University invented CAPTCHA in 2000 as a way to combat those bots. Computer programs struggle with many tasks humans do almost without thinking, especially tasks that involve processing visual information. The original distorted-text CAPTCHA is now considered deprecated in most uses, but it paved the way for successors such as the popular image-grid challenges ("Which of these pictures shows a motorcycle?") that are still used today.
2000s: Multifactor authentication
Multifactor (or two-factor) authentication is a form of login technology that asks users to offer a second, corroborating piece of information along with their username and password. This may come as a text message or through an app like Google Authenticator. While the technology dates back to the 1980s, it was first introduced to consumers in the 2000s, when banks began rolling it out. The New York Times reported on the rise of two-factor authentication in 2004, a time when many Americans didn't even have broadband internet yet.
2010s: Zero trust
If you’ve read this far, you may be starting to feel like no piece of data is ever safe. You’re not alone. Computer security is deeply complex and ever-changing because criminals and other bad actors keep pace, constantly devising new forms of intrusion. One of the latest paradigms is zero trust, a term that means doing away with earlier ideas like “trusted devices.” Under zero trust, security information is verified for every device on every attempt to access a network, and users are granted access only to the data needed to complete a given request.
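In code terms, zero trust means there is no fast path for previously trusted devices: every single request is checked against policy. A schematic sketch of that per-request check, with the names and policy model invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_verified: bool   # device identity re-checked on this request
    mfa_passed: bool        # fresh multifactor authentication
    resource: str

# Least privilege: each user sees only the resources their role needs.
POLICY = {"alice": {"payroll-db"}, "bob": {"wiki"}}

def authorize(req: Request) -> bool:
    """Verify every request explicitly -- no implicit trust based on
    network location or past sessions (zero-trust model, schematic only)."""
    if not (req.device_verified and req.mfa_passed):
        return False
    return req.resource in POLICY.get(req.user, set())

assert authorize(Request("alice", True, True, "payroll-db"))
assert not authorize(Request("alice", True, True, "wiki"))   # least privilege
assert not authorize(Request("bob", False, True, "wiki"))    # device unverified
```

The contrast with the older perimeter model is that there is no branch here saying "inside the corporate network, so allow": verification happens on every call.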
This story originally appeared on Beyond Identity and was produced and distributed in partnership with Stacker Studio.

