On February 24, 2020, the Pentagon outlined five principles to ensure ethical use of Artificial Intelligence (AI) in the Department of Defense (DOD). The new guidelines call for AI use that is:

1. Responsible

DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.

2. Equitable

The Department will take deliberate steps to minimize unintended bias in AI capabilities.

3. Traceable

The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.

4. Reliable

The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.

5. Governable

The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

Could AI principles outlined by the Pentagon be a model for future government AI regulation? 

Some might argue that government regulation of AI could negatively impact AI development because intellectual property would be compromised, or that requiring AI providers to implement additional code could delay or impede AI innovation and development.

As AI continues to mature and expand, some people are becoming concerned about its future implications. Some even believe that if AI is not developed responsibly, it could one day rebel against humans. Is this possible?

Let's explore the possibilities, then discuss the idea of government regulation.

Imagine a weaponized drone with AI decision-making technology. This type of autonomous weapon system could help the military identify, stalk, and kill an enemy target without any human intervention, providing a powerful advantage over an adversary.

Now imagine if that drone were compromised or malfunctioned, causing it to target “friendly” targets, such as humans. To regain control, AI developers might need to “hack” back into the drone to correct the problem. However, any attempt to hack into the drone could be viewed, by the drone, as an imminent threat, triggering it to attack the hackers.

Although this scenario is extreme and sounds like something from a science fiction movie, the reality is that in the future it will be possible for humans to lose control of autonomous systems. AI is a technology that can learn, write code, operate a vehicle, make predictions, and make decisions.

AI is also a technology that can produce results or outcomes that developers did not anticipate. Therefore, we will most likely see government regulations in the future that ensure safety measures are implemented, encourage unbiased AI development, and protect privacy.

The Pentagon's new AI principles could be a model for government lawmakers to use if AI regulations are imposed.

We already have consumer privacy protection laws, so why do we need specific regulations for AI to protect consumer privacy?

AI will enhance market research technology, providing far more insight into a person than is available today. AI solutions will provide insight into a person's habits, emotions, moods, and tendencies, and will make accurate predictions about the decisions an individual is likely to make in the future.

Market research discovery applications powered by AI technology will give advertisers enormous amounts of consumer information, allowing them to create marketing campaigns targeted at an individual. However, if not regulated, AI with big-data discovery capabilities might collect data about individuals that could be considered personal and private.

AI guidelines outlined by the Pentagon require that DOD personnel “exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.”

To protect privacy, government regulations would need to include similar guidelines to ensure “responsible” AI development, ensuring that AI developers exercise appropriate levels of care when writing AI algorithms intended to discover personal or private information.
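
As an illustration of what “appropriate levels of care” might look like in practice, here is a minimal sketch, assuming a hypothetical `redact_record` helper and invented field names, of a redaction step that strips regulator-defined personal fields before a record ever reaches an AI market-research pipeline:

```python
# Hypothetical fields assumed to be classified as personal or private.
PERSONAL_FIELDS = {"name", "email", "home_address", "health_notes"}

def redact_record(record: dict) -> dict:
    """Return a copy of the record with personal fields removed."""
    return {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}

consumer_record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "purchase_category": "outdoor gear",
    "visits_per_month": 6,
}

# Only non-personal attributes reach the analytics model.
print(redact_record(consumer_record))
# {'purchase_category': 'outdoor gear', 'visits_per_month': 6}
```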

Why would the government be concerned about unintended bias in AI development? 

Developers can unintentionally introduce bias into AI algorithms based on their technical limitations, culture, gender, race, sexual orientation, expectations, or religious beliefs. Remember, AI has the ability to learn, which means that humans will train AI solutions by supplying the data the systems need. Therefore, it will be important to ensure that the data supplied to AI learning algorithms is not tainted with human biases.
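
As a rough sketch of what checking training data for bias could involve, the snippet below computes approval rates per group in a hypothetical loan-decision dataset and flags a large gap for human review. The data, group labels, and 20% threshold are invented for illustration; a real fairness audit would be far more involved:

```python
from collections import defaultdict

# Hypothetical historical loan decisions used as AI training data.
training_rows = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(rows):
    """Compute the approval rate for each group in the training data."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for row in rows:
        totals[row["group"]] += 1
        approvals[row["group"]] += row["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(training_rows)
print(rates)  # e.g. {'A': 0.67, 'B': 0.33}

# Flag the dataset if group approval rates differ by more than an assumed 20%.
if max(rates.values()) - min(rates.values()) > 0.20:
    print("Warning: training data may encode historical bias; review before use.")
```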

For example, today there are solutions available that perform background checks on potential employees or verify whether someone can pay back a loan. When powered with AI, these solutions will go much further, providing insight into a person's daily habits, beliefs, priorities, and life goals. AI-powered solutions will be able to predict, with greater accuracy, whether a person will be a good employee or will pay back a loan.

AI-powered solutions will analyze the data they collect, then provide recommendations to potential employers or creditors. These solutions will become trusted advisors to their owners, much like a “human” trusted advisor, essentially giving decision-making power over candidates to a computer. However, solutions developed with AI technology could inherit unintended bias from their developers, skewing results to favor a specific race, religion, or sex.

When AI technology is used to provide recommendations about a potential employee, or about whether someone should be granted a loan, it will be important to understand how the results were obtained. Stakeholders need to know what data was considered and how that data was “weighted” to reach a final result.

To minimize unintended bias in AI, Pentagon guidelines require that AI be “equitable.” This means that the DOD will take deliberate steps to minimize unintended bias in AI capabilities. It's probable that similar guidelines will be defined if government AI regulation is implemented.

Furthermore, AI solutions performing such tasks will need to be “traceable,” which means that they will have to be auditable and transparent about the data sources used to reach a conclusion.
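
To make “traceable” concrete, here is one hedged sketch: every recommendation is emitted together with a machine-readable record of the data sources consulted and how each input was weighted. The `score_with_audit` function, the weights, and the field names are assumptions invented for this example:

```python
import json
from datetime import datetime, timezone

# Hypothetical feature weights from a simple linear scoring model.
WEIGHTS = {"income": 0.5, "payment_history": 0.4, "debt_ratio": -0.3}

def score_with_audit(applicant: dict, sources: list) -> dict:
    """Score a loan applicant and return an auditable decision record."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_sources": sources,                  # where each input came from
        "feature_contributions": contributions,   # how the data was weighted
        "score": round(sum(contributions.values()), 3),
    }

record = score_with_audit(
    {"income": 0.8, "payment_history": 0.9, "debt_ratio": 0.4},
    sources=["credit_bureau_feed", "bank_statement_upload"],
)
print(json.dumps(record, indent=2))  # a record an auditor can inspect later
```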

How would government regulations ensure safety in AI development?

AI will make great strides in transportation technology, such as self-driving cars. It will be important that AI technology used in autonomous vehicles is “reliable.”

To ensure safety, we should anticipate that government regulations will require AI algorithms to be properly tested before they are placed into operation.
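
As an illustration only, testing “within defined uses” might resemble the scenario-based checks below. The `should_brake` function and its three-second stopping-window rule are hypothetical stand-ins for a real autonomous-driving component:

```python
# Hypothetical braking decision used by an autonomous vehicle.
def should_brake(obstacle_distance_m: float, speed_mps: float) -> bool:
    """Brake if the obstacle falls inside an assumed 3-second stopping window."""
    return obstacle_distance_m < speed_mps * 3.0

# Assurance tests covering the system's defined uses.
DEFINED_USE_SCENARIOS = [
    # (distance to obstacle in m, speed in m/s, expected decision)
    (10.0, 15.0, True),    # obstacle well inside the stopping window
    (100.0, 15.0, False),  # obstacle far away at moderate speed
    (5.0, 2.0, True),      # very close obstacle at low speed
]

for distance, speed, expected in DEFINED_USE_SCENARIOS:
    actual = should_brake(distance, speed)
    assert actual == expected, f"failed at distance={distance}, speed={speed}"
print("All defined-use scenarios passed.")
```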

Government safety regulations for AI could be similar to the Pentagon guidelines: “AI will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.”

Future government regulations could also require that AI algorithms be designed to give third parties or government authorities access, so that data and code can be dissected and analyzed when an autonomous vehicle is involved in an accident. However, such regulations could infringe on the intellectual property of AI developers.

How could government regulations be implemented to protect intellectual property while, at the same time, protecting consumers?

Regulation could negatively impact AI development if intellectual property is not protected. Government lawmakers need to find a way to protect consumers while also protecting the intellectual property of companies that provide AI technology.

If government regulations require AI developers to implement additional code so that solutions are traceable, it could delay or impact AI innovation and development.

Government regulations that could mitigate the impact to AI innovation and development might look as follows:

  • For AI technology that does not raise safety concerns, government regulators might consider obtaining information through statistics, trends, and surveys as a way to protect both consumers and AI intellectual property.
  • For AI technology that does raise safety concerns, such as self-driving vehicles, government regulations could require that manufacturers provide experts to assist government officials investigating a vehicle involved in an accident. These experts would help officials access, dissect, and analyze data to determine the cause of the accident.

AI regulations will most likely require that manufacturers provide a “kill” switch for any type of vehicle controlled with AI technology.
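
To sketch how such a “kill” switch could be wired in, the example below wraps a hypothetical AI controller in a supervisor that can disengage it, either automatically on unsafe output or via an explicit human command. All class and method names here are invented for illustration:

```python
class SupervisedController:
    """Supervisory wrapper that can deactivate an AI controller."""

    def __init__(self, controller, max_speed_mps: float):
        self.controller = controller      # the AI decision-maker being wrapped
        self.max_speed_mps = max_speed_mps
        self.engaged = True

    def command(self, sensor_input: dict) -> dict:
        """Pass through AI commands, falling back to a safe stop if disengaged."""
        if not self.engaged:
            return {"action": "stop"}
        action = self.controller.decide(sensor_input)
        if action.get("speed_mps", 0.0) > self.max_speed_mps:
            self.disengage()              # unintended behavior detected
            return {"action": "stop"}
        return action

    def disengage(self):
        """The kill switch: human- or system-triggered."""
        self.engaged = False

class DemoController:
    """Stand-in AI controller that simply echoes the requested speed."""
    def decide(self, sensor_input: dict) -> dict:
        return {"action": "drive", "speed_mps": sensor_input["requested_speed"]}

supervisor = SupervisedController(DemoController(), max_speed_mps=30.0)
print(supervisor.command({"requested_speed": 20.0}))  # normal operation
print(supervisor.command({"requested_speed": 45.0}))  # triggers the kill switch
print(supervisor.command({"requested_speed": 20.0}))  # remains disengaged
```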

In a report titled “Predicts 2019: AI and the Future of Work,” Gartner found that “AI-enabled decision support is the greatest contributor to business value creation, overshadowing AI process automation throughout the entire forecast period of 2015 through 2025, globally.”

Artificial Intelligence (AI) is unlike any technology we have seen in the past. Much like a child, AI technology is maturing and “growing up.” Soon, it will go out on its own to make decisions with minimal guidance. Therefore, we will most likely see government AI regulations in the future.

AI principles outlined by the Pentagon would be an excellent model for government AI regulation. These principles would promote responsible AI innovation, protect consumer privacy, ensure fairness with unbiased AI solutions, and ensure consumer safety while protecting AI intellectual property.