May 12, 2021 · By Jason Crabtree

AI Rules of the Road

The National Security Commission on Artificial Intelligence released its final report about halfway into the first 100 days of President Biden’s administration. Biden’s national security advisor, Jake Sullivan, tweeted after reading the report that “the U.S. and its allies must continue to lead in AI, microelectronics, biotech, and other emerging tech to make sure that these technologies are safe, secure and beneficial to free societies.”

The report was written against the backdrop of anxiety about China’s influence and its effort either to write the first draft of global AI rules or to prevent the United States and other democracies from writing them. On the heels of the report, several U.S. corporations with heavy investments in AI released glossy responses, taking it as a call to arms. Companies and agencies alike want to get to work quickly.

But velocity and activity in AI should not be conflated with progress. From a policy perspective, AI is best reasoned about as an emerging framework for automating a host of techno-political and social interventions. We therefore shoulder the responsibility to build into that framework a distinct set of values, ones that operationalize the goods or virtues we want AI, however general or (more likely) narrow in the near term, to help us achieve.

To wit: AI encompasses many technologies that can be useful tools for informing or even automating decision-making, but training data, along with heuristic selection and curation, can introduce dangerous bias: algorithms that discriminate, that echo unconscious cognitive errors, that discount divergence, and that manipulate the choice and attention architecture of human beings without their consent and with potentially deleterious effects on their well-being.

Based on the present state of research, we don’t believe that we’re on the cusp of a consciousness-like “general AI” that can optimize itself; we expect use-case-driven approaches to win, and most near-term progress to remain in narrowly tailored, domain-specific applications. But we cannot develop those use cases well without a broader popular and intellectual framing around AI that we feel comfortable applying to almost any case. We are in a liminal period, one in which the rules we write will last a generation or more, so now is the time to prime AI use (as opposed to research and development) with ethical standards in the more general sense. Use cases that aim to give people a better advertising experience can be hijacked to feed them manipulated information that harms their health. Use cases that use AI and ML to structure data so police can map the spread of gangs in a city more efficiently, but which do not account for human and institutional prejudices, can (and do) reinforce forced pattern-matching, where the human responsible for adding data exceeds his or her knowledge (a dynamic sketched in the toy example below). We have to decide between a future of replacement and subservience and a contrasting one focused on enablement and ennoblement.
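
A toy simulation makes that feedback loop concrete. The district names, the numbers, and the assumption that incidents are only recorded where patrols are present are all hypothetical, chosen purely to illustrate how an initial skew in the records can persist even when the underlying reality is identical everywhere.

```python
# Hypothetical toy model: two districts with identical true incident rates,
# but historical records that start out skewed. Patrols are allocated in
# proportion to past records, and incidents are only recorded where patrols go.
true_rate = 10.0                                      # same underlying rate in both districts
recorded = {"district_a": 12.0, "district_b": 8.0}    # skewed starting records

for year in range(5):
    total = sum(recorded.values())
    patrols = {d: v / total for d, v in recorded.items()}          # allocate by past records
    recorded = {d: true_rate * 2 * p for d, p in patrols.items()}  # record only what patrols see
    print(year, {d: round(v, 1) for d, v in recorded.items()})
```

The disparity in the records never corrects itself, because the system keeps consuming data that its own allocation decisions produced; no post-hoc adjustment to the output fixes a loop built into the input.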

This is why data anonymization and masking of personal and location data, at the heart of most privacy discussions, are both necessary and insufficient. Anonymizing data may (partially) protect our privacy, but it assumes that we’ve already lost control over how we want it to be used. Ideally, we would inform everyone in advance that something of value they produced (wittingly or unwittingly) was about to be used in an AI system, that it would be anonymized to complete or nourish a data set, and that the use case would generate value, profit, or efficiency for the creator.
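
For concreteness, here is a minimal sketch of the kind of anonymization and masking under discussion. The field names, the salting scheme, and the coordinate precision are assumptions made for illustration, not a description of any particular product or pipeline.

```python
import hashlib

SALT = "replace-with-a-secret-salt"  # assumed to be stored separately from the dataset

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def coarsen_location(lat: float, lon: float, precision: int = 2) -> tuple:
    """Round coordinates so they identify a neighborhood, not a doorstep."""
    return round(lat, precision), round(lon, precision)

def mask_record(record: dict) -> dict:
    """Return a copy of the record with personal and location data masked."""
    masked = dict(record)
    masked["user_id"] = pseudonymize(record["user_id"])
    masked.pop("email", None)  # drop identifiers with no analytic value
    masked["lat"], masked["lon"] = coarsen_location(record["lat"], record["lon"])
    return masked

print(mask_record({"user_id": "alice@example.com", "email": "alice@example.com",
                   "lat": 38.8951, "lon": -77.0364, "purchase": 42.50}))
```

Even done carefully, this only addresses identifiability; it says nothing about whether the person would have consented to the use in the first place, which is the point above.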

We would approach the field from a different perspective: build a values case first. Start from this point: since human beings have become, in the age of surveillance capitalism and digital enterprises, constant producers of value, we believe they ought to have more agency over the value they create. At a minimum, they ought to know what their data is being used for, why it is being used, and whether there is a practicable way for them to control, influence, or altogether opt out of its use. In the way that environmental impact statements can inform development decisions and prevent community harm, and perhaps borrowing the framework of the privacy impact policies that government agencies already implement, those who deploy AI technologies designed to exploit heuristics should be challenged to explain, in plain words, what they want to manipulate, why they want to manipulate it, and what harm it might cause, both in the immediate context of deployment and in the broader ecosystem that deployment touches. Of course, data scientists might not be able to foresee every answer, but practicing the question develops an ethical intuition they can apply to every other data set or algorithm they produce.
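
What such a plain-words declaration might contain can be sketched as a simple data structure. Every field name below is a hypothetical choice made for illustration; no existing standard or template is being quoted.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIUseDeclaration:
    system_name: str
    data_used: List[str]            # what data of the subject's is used
    purpose: str                    # why it is used
    heuristics_targeted: List[str]  # what the system nudges or manipulates
    potential_harms: List[str]      # foreseeable harms in this deployment
    opt_out_mechanism: str          # how a person can control or refuse the use

declaration = AIUseDeclaration(
    system_name="engagement-ranking-model",
    data_used=["click history", "dwell time"],
    purpose="rank articles to increase reading time",
    heuristics_targeted=["recency bias", "novelty seeking"],
    potential_harms=["amplifying sensational or misleading content"],
    opt_out_mechanism="chronological feed toggle in account settings",
)
print(declaration)
```

The point is less the format than the discipline: filling in the potential harms and the opt-out mechanism forces exactly the conversation argued for above.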

The NSCAI report, in a slightly different context, suggests that we deploy AI to build digital resilience against those who would transgress ethical norms, noting that “Digital dependence in all walks of life is transforming personal and commercial vulnerabilities into potential national security weaknesses. Adversaries are using AI systems to enhance disinformation campaigns and cyber-attacks. They are harvesting data on Americans to build profiles of their beliefs, behavior, and biological makeup for tailored attempts to manipulate or coerce individuals.” As we saw with misinformation, though, the greatest harm can be homegrown.

If a company’s sole purpose is to maximize the growth of a particular engagement or efficiency metric, its AI technology selection will reflect that choice, no matter how many post-hoc adjustments one tries to make for runaway unintended consequences. If, on the other hand, a company intends to contribute to the common good, it will have to determine, among other things, how much of its own proprietary software and store of knowledge it will reveal in the process of engaging the public more broadly. This is a hard problem, to be sure, but it is the right type of problem. It demands explicit discussion and treatment.

The public can be cued to pressure companies into acknowledging the balance between human equities, corporate equities, and national security equities, and even into acknowledging, and surfacing, those areas where the equities do not overlap. To this we would add a fourth equity that informs our work: social equity. Ideas sprint. Technology marches. Regulation drags. Externalities mushroom. Since virtually every major technological advance in the Anthropocene has both raised average human well-being and increased relative inequality between groups, we should work to ensure that AI design reflects our desire to reduce relative inequality, if not of wealth then of opportunity. We must apply this lens and ensure that our use of AI does not exacerbate wealth inequality, disparities in justice and policing, or unequal access to truthful and relevant information, and does not erect new barriers to political participation and enfranchisement. If not checked and aligned with more positive goals, AI will contribute to uncertainty and unrest instead of to growth and shared prosperity and abundance.

It might be said that if the national security sector in the United States obsesses over developing ethical and equitable AI projects, we cede ground and resources to an adversary that consciously uses AI to discriminate directly, to surveil and control its citizens, and to expose and suppress dissent. Further, maybe the average American is so inured to meta-debates about privacy, data, and surveillance that another one imposed on them by elites is destined to be politicized and therefore meaningless. The first concern can be dispensed with: if America is behind on AI, it is behind because it has paid insufficient attention to developing a culture where AI can flourish within the norms of evolving American political culture and values. As for the second: when a major subject of debate falls into deep political disrepute, it is usually because its gatekeepers refused to tell the truth early enough about its aims, intentions, and assumptions; secrecy and opacity are catnip for demagogues. But we have a conscious choice: we can do the hard thinking and communicating now, allowing the public’s values to influence our decisions, or we can fill endless tomes explaining what went wrong later.
