
Realistic GenAI Cybersecurity Regulation: Don’t Give Your Data Covid

The GenAI revolution needs a global response. The technology creates text and images, holding immense potential but raising major cybersecurity concerns, and a widespread lack of technical understanding makes it difficult to regulate. GenAI can create cyber threats faster than our defenses can adapt. A "clean room" approach that isolates GenAI during testing and destroys it after use would prevent misuse. Effective combating of cybercrime will also require international law enforcement cooperation.

October 16, 2023 21:49 EDT

Generative AI (GenAI) — capable of producing text, images or other outputs — has a tremendous potential to benefit us. But it can have some nasty side effects, too. These powerful tools can make cybercrime much easier.

There have been many calls for government regulation to help control the risks created by GenAI. Some authors have suggested differentiated regulation that focuses on high-risk applications. Others have speculated that existing frameworks, like those outlined by the European Commission’s High-Level Expert Group on Artificial Intelligence, can address the issues GenAI creates. Unfortunately, few of those calling for regulation understand the underlying technology, and most of these proposals have been unrealistic, technically infeasible and generally unhelpful.

We hope to provide a helpful and realistic regulatory approach to the use of GenAI for security testing and training. Our intent is to start a discussion among public policy and regulatory professionals worldwide.

What makes GenAI risky?

GenAI is a very powerful new technology that differs from previous generations of AI in fundamental ways. The data sets GenAI is trained on encompass most of the information in the world, enabling it to create genuinely new things, including things no human would expect. This poses a serious cybersecurity threat that the West is not currently prepared to deal with: GenAI can create new types of attacks faster than existing defensive tools can adapt to meet them.

GenAI’s vast global impact is a digital version of Covid. In the early stages of the pandemic, we had no controls and little natural immunity. The disease spread like wildfire. GenAI, too, could compromise the security of computer systems worldwide. And, just like a Covid infection, it could infect your data and cause damage before you even know about it.

Bad actors have taken an early lead in using GenAI to fundamentally change the cybersecurity attack space. They will do everything possible to improve GenAI systems for creating attacks, both by subverting attempted controls on public systems and by creating private systems stripped of all controls.

The threat is real, severe and imminent. Even with GenAI still in its infancy, cybercrime is an enormous, industrial-scale threat, costing the world trillions of dollars. If GenAI were to turbo-charge cybercrime, it could dramatically degrade our digital infrastructure and, with it, our quality of life.

How can we combat the cybersecurity threat?

Perhaps the first instinct that we might have is to say, “If criminals are going to use GenAI to figure out how to breach our security systems, then we should use GenAI to figure out how to make our security systems stronger.” Unfortunately, things aren’t so simple.

Experts warn against using GenAI to experiment with and test mitigation tools. Unless done in a sterile cyber “clean room,” testing your security tools in the open will make the GenAI system better at overcoming them and more likely to create nasty side effects. In the cybersecurity space, this creates a dilemma.

The question is: How do we test and improve our defenses without making GenAI systems stronger and better at creating attacks?

There is a large body of published material documenting the capability of GenAI systems to do harmful things. Some of these authors recommend against using GenAI systems to test for such harms, because doing so trains the systems to do harm more effectively and to find easier ways around the safeguards GenAI vendors use to prevent it (often called “ethical controls”).

Here is a simplified explanation of an approach to create such a “clean room” to safely improve our defenses. (The approach described here is targeted at a specific GenAI cybersecurity problem. There are other types of nasty side effects. This specific approach may also work for some of them; for others, something different may be needed. But, the key will always be a good technical understanding of both the nature of GenAI and of the side effects.)

The Clean Room approach to cybersecurity testing and training is based on two basic principles:

  1. Place the GenAI system in an environment where it cannot communicate with the outside world.
  2. Destroy the GenAI system after it is used (similar to what is done with viruses in bio labs).

To achieve the first principle, this system must run on isolated, dedicated hardware — sometimes called “air-gapped” hardware. This hardware is physically disconnected from any other devices or signals that could connect it to the Internet. This would prevent it from spreading any harmful effects it may develop into the wider world.
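
To make the first principle concrete, here is a minimal sketch, in Python, of a pre-flight check a clean-room test harness might run before a session starts. The probe endpoints and refusal logic are our own illustrative assumptions, not part of any standard, and no software check can prove physical isolation; it can only catch configuration mistakes, such as a network cable someone forgot to pull.

```python
import socket

# Hypothetical pre-flight check for a clean-room host. A software probe
# cannot prove physical air-gapping; it can only catch misconfiguration.
# The endpoints below are illustrative public resolvers, not a standard.
PROBE_ENDPOINTS = [("8.8.8.8", 53), ("1.1.1.1", 443)]

def host_appears_isolated(timeout: float = 2.0) -> bool:
    """Return True only if every outbound probe fails to connect."""
    for host, port in PROBE_ENDPOINTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return False  # Any successful connection means we are online.
        except OSError:
            continue  # Timeout or refusal is consistent with isolation.
    return True

if __name__ == "__main__":
    if not host_appears_isolated():
        raise SystemExit("Refusing to start: host can reach the outside world.")
    print("No outbound connectivity detected; the session may proceed.")
```

The physical disconnection still does the real work; a check like this simply stops a session from starting on the wrong machine.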

[Figure: Containing Generative AI Systems for Ethical Test and Training. Authors’ original image.]

The second principle is intended to remove the threat of the GenAI system learning to become better at attacks. A system trained against security devices would be contaminated with the ability to overcome these devices, and could therefore not be safely used again. So, it needs to be destroyed.

The key question here is how to do so economically. GenAI systems are expensive. They take a long time and a lot of effort to train. It is difficult to convince people to throw all that investment away.

Once a GenAI system has been created, however, it can be cloned. Cloning takes effort, but nothing like the effort required to develop and train a system. Thus, for a particular lab test or training session, an inventory of GenAI clones can be created and, at the end of the test or training session, be destroyed.
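
As a rough illustration of that lifecycle, the sketch below (Python, with a hypothetical path and a hypothetical `test_fn` callback) treats a trained model as a directory of weights: cloning is a copy from a read-only master, and destruction is deletion of the copy when the session ends. A real lab would also need to wipe GPU memory, fine-tuned checkpoints and any logs the session produced.

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical layout: the master model stays read-only and is never
# tested against directly; every session runs on a disposable copy.
MASTER_WEIGHTS = Path("/secure/genai/master")  # illustrative path

def run_disposable_session(test_fn) -> None:
    """Clone the master model, run one test session, then destroy the clone."""
    clone_dir = Path(tempfile.mkdtemp(prefix="genai_clone_"))
    try:
        shutil.copytree(MASTER_WEIGHTS, clone_dir / "weights")
        test_fn(clone_dir / "weights")  # attack/defense tests touch only the clone
    finally:
        # The clone is now contaminated by what it has learned; delete it.
        shutil.rmtree(clone_dir, ignore_errors=True)
```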

We need to regulate the GenAI development space

Just like testing for and preventing the spread of a disease like Covid, it may seem obvious that we need to be careful with GenAI. But many well-intentioned people don’t think before they act. And not everyone is well-intentioned. Some of those who know better will fail to do the ethical thing in an effort to control costs and improve profitability. Because of this, we cannot expect everyone to implement ethical GenAI development by themselves. Regulation is necessary to mandate safe practices.

For example, in one recent case, a distinguished Stanford computing professor, speaking in a seminar, proudly described how he had used a well-known public GenAI system to find vulnerabilities. He was writing code for a user authentication system (a system that checks user credentials, such as user ID and password, before allowing access), showing the code to the GenAI system and asking it to find ways of successfully attacking it. It never occurred to him that he was training the GenAI system to be a better attacker.
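
We do not know what the professor’s code looked like, but a hypothetical fragment shows the kind of flaw such a session exposes. The naive comparison below leaks timing information, a classic authentication weakness, and the standard-library fix is one line; handing the flawed version to a public GenAI and asking it to attack teaches the model the weakness instead.

```python
import hmac

# Hypothetical example of a flaw a GenAI might be asked to find in
# authentication code. The "==" comparison can exit at the first
# mismatched byte, so response time leaks how close a guess is.
def check_password_naive(stored_hash: bytes, supplied_hash: bytes) -> bool:
    return stored_hash == supplied_hash  # early-exit: timing side channel

# The fix: a constant-time comparison from Python's standard library.
def check_password_safe(stored_hash: bytes, supplied_hash: bytes) -> bool:
    return hmac.compare_digest(stored_hash, supplied_hash)
```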

There is such a flood of research being published that it is not possible even for someone whose job directly involves it to read it all. And then there is the problem of thinking through the practical implications.

If an academic researcher has this problem, what about busy professionals in the field? They do not have easy access to the research and may have no time to follow it. Before testing systems or training staff, will they think about the ethical implications?

Then, there are those who will think, “I have to get a product in the field quickly at the lowest cost and make a profit. All this ethical stuff is true. But, it just gets in the way of me making money. So, I will ignore it.”

This is where regulation comes in. Simple, clear regulations with easy-to-understand procedures for implementation and enforcement will remind the folks who don’t think and change the cost/profit equation for the folks who don’t care.

The role of law enforcement

But government regulation, by itself, is not enough. Regulation, if implemented in a consistent form internationally, will be effective with legally responsible organizations and individuals, but not with those who seek profit through crime. For those, law enforcement is required.

Currently, there are two GenAI systems on the dark web that are providing cybersecurity attack services in exchange for cryptocurrency. There are also likely to be similar GenAI systems running in rogue nations. Regulation won’t change the behavior of the people and organizations behind these. Only law enforcement will.

Because cybercrime works across borders, an effective international cybercrime law enforcement effort is required. Such an effort should be designed to directly mitigate cybersecurity problems in participating states and indirectly mitigate problems created by rogue states.

The situation today is comparable to one we faced nearly a century ago. In the 1930s, V8 Fords appeared on the US market. Until then, bank robbery had been a local affair, but the new Fords made it possible for robbers to race across state lines, escaping local law enforcement. The only effective defense was one organized at the national scale: the FBI, a federal law enforcement agency. Today, we face attackers crossing national boundaries. It is difficult to imagine a law enforcement agency with global jurisdiction, but at the least, law enforcement agencies across the world must coordinate with one another.

A first step might be the creation of an international forum for cooperation. The UN is currently working on creating an AI study group, which could be the seed from which such a forum grows. International efforts like this take time, so in the meantime, existing law enforcement and regulatory organizations should gear up to do the best they can to control both law-abiding and criminal GenAI systems.

Other examples of harmful GenAI activity include spear phishing, the spread of deepfakes, lowered entry barriers for malicious actors, the enabling of cyberattacks and a lack of social understanding that can lead to inappropriate advice, amongst others. It is our hope that other regulatory approaches, based on a similarly sound technical understanding of GenAI’s capabilities and problems, can be developed.

[Anton Schauble edited this piece.]

The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.
