AI Ethics IV
Creating a Code of Ethics for Artificial Intelligence
For more than 50 years, scientists, philosophers, and mathematicians have been exploring and refining artificial intelligence (AI), while the average person remained largely in the dark about its details and actual capabilities. It wasn’t until IBM’s Deep Blue computer defeated world chess champion Garry Kasparov in 1997 that most of us could begin to relate to this incredible new branch of science.
The next generation experienced a similar moment 14 years later, in 2011, when IBM’s Watson defeated two Jeopardy! champions.
And now, a decade further still, AI is all around us.
Most of us can’t even distinguish applications of AI from “normal” life. AI powers our social media and news feeds. It’s given us speech and biometric recognition along with verbal computer communication. Language-based interfaces answer our questions and give us directions to where we’re going.
Simply put, yet amazing to consider, AI simulates human thought processes and actions to perform tasks, forming conclusions much as a human might.
So when it comes to machines that think and act like a person … what could go wrong with that?
We naturally don’t trust things we don’t understand. And if we can’t understand things like AI fully, we want to be sure that someone we trust is watching out for our best interests. We don’t have that kind of oversight at the moment, but we’re quickly going to need it.
Bad Data and Bad Actors
Why do we need AI regulation?
Many ethical issues arising from AI are due to poorly formed or inadequate inputs, such as the data and the base of knowledge the AI program is given to work with. Bad data gives bad results, and biased data that doesn’t actually reflect the real-world situation will trigger biased machine learning and, therefore, flawed results. Policies or actions based on those results move us backwards, not forwards.
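The bad-data dynamic can be sketched in a few lines of code. This is a hypothetical loan-approval example; the group labels, the skewed history, and the `majority_decision` helper are all invented for illustration, not drawn from any real system:

```python
from collections import Counter

# Hypothetical loan-approval history: past human decisions were skewed,
# approving applicants from group "A" far more often than group "B".
biased_history = (
    [("A", "approve")] * 90 + [("A", "deny")] * 10 +
    [("B", "approve")] * 20 + [("B", "deny")] * 80
)

def majority_decision(history, group):
    """A naive 'model' that simply replays the most common past outcome
    for a group -- it faithfully learns whatever bias the data contains."""
    outcomes = [decision for g, decision in history if g == group]
    return Counter(outcomes).most_common(1)[0][0]

print(majority_decision(biased_history, "A"))  # approve
print(majority_decision(biased_history, "B"))  # deny
```

Nothing in the code is malicious; the skew in the training data alone is enough to produce systematically different outcomes for the two groups.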
There are also opportunities for humans to use AI technology in a more nefarious, possibly criminal way. AI can be used to manipulate, trick, and distort, but one person’s manipulation may be another person’s “persuasion.” And pre-judging – in targeted advertising, for example – isn’t too different from “customizing.” It’s all a matter of degree and intent. We know inappropriate AI-driven activity when we see it, but bright lines are often hard to define.
Underlying Standards for AI Ethics
Responsible AI that safely serves humankind (and not just the manipulators, tricksters, and distorters) will need to:
Explainability
Be based on standards of transparency and “explainability,” meaning a common understanding of the input, processing, and output of the AI function. This entails having common, agreed-upon language and definitions as well as measures of success and error. It means there are no black boxes with results or processes that can’t be rationally explained.
Standards
Be based on standards that weigh both the benefits and the risks of AI. This requires stepping back during the research and design phase to consider the harms or downsides that might occur through missteps in the development of the system, or through exploitation of the system once it’s released. At the development stage, there’s a tendency to over-promote the positives and downplay concerns, a tendency known as “ethics washing.” Equally concerning is the inclination to ignore the downsides of AI “for now,” while offering assurances that these issues will be addressed as they arise.
Safeguards
Maintain guardrails, safety standards, and protocols to govern high-risk functions of AI, such as voice and facial recognition or social bots. The AI behind deepfakes is a prime example: this amazing technology can be used for legitimate purposes such as movie special effects, or for nefarious purposes to misinform, deceive, misrepresent, and much worse. Bots are another example. Used appropriately, they serve as digital assistants or as web crawlers that sort and rank information; used maliciously, social bots can distort online discussions and spread falsehoods.
Human-in-the-Loop
Ensure there’s always a human in the loop to train, tune, and test AI algorithms so that the first three criteria are met. In other words, AI should not be left to oversee its own development.
Using these standards as guideposts, AI regulation will need to get into the weeds quickly. We shouldn’t try to create broad enforceable standards that address every conceivable application. Instead, we should focus on use cases involving specific tools, such as facial recognition, and specific business sectors, such as the use of AI in banking or national defense.
Creating an AI Police Force
Who is it that should put these policies in place and enforce them? This is a huge question with no easy answers.
We can’t completely count on legislative bodies to codify and regulate these standards. Policymaking is slow, subject to special interest manipulation, and, yes, even political. These government bodies always seem to address last-generation challenges even as iterative new problems replace the old ones.
Plus, the standards need to be global, which is an even more challenging policy arena. Governments at every level have a measure of independence from industry, which is important to foster trust but makes them less attuned to the urgencies of today.
To be certain, government bodies haven’t exactly been on the sidelines. Nations (e.g., the U.S. and China) and blocs of nations (e.g., the EU) have issued frameworks for oversight and regulation of AI. It’s at least a start.
More likely, the policies will be set by regulators, with knowledgeable input from industry tempered by informed regulatory staff who can put that input into context.
As this evolves, though, we’ll need to put the onus on tech companies themselves to establish internal standards and work collectively to develop industry-wide standards. These can serve as points of information and even boundary lines for establishing regulations.
What Else Needs to Happen?
To be certain, I’m a big fan of AI. Our future will be driven by AI technology that can discern, understand, learn, and act far quicker and more reliably than we can. It will give us additional capabilities and we’ll all be better for it.
But to enhance trust and to keep AI in its proper place, all of us – in addition to regulatory safeguards – will need to recognize AI for what it is and what it isn’t. AI involves machine learning and other processes whose results can be flawed when that learning draws on incomplete or biased data.
Ninety-nine times out of 100, your map application will get you to your destination. But don’t bet your life on it, because there will still be times when it sends you to a dead end or a road closure. AI programs might be able to predict an election outcome with a stated level of certainty, but they may still have their own “Dewey Defeats Truman” moments – perhaps because, in those political contexts, they rely on polling data from people who increasingly hate to be polled.
We’ll also need to develop our own healthy skepticism – to not always believe what we see and read on social media, for example. That will be a major challenge as more and more curated content is AI-distributed and bot-driven in ways that reinforce the predilections we started with.
For better and worse, AI mimics human thought processes. The cycle of incomplete inputs leading to faulty learning and, in turn, bad decisions holds true for humans as well.
We aspire to live in a world far better than what we have today, and AI is a key piece of the equation for getting us there.
By Futurist Thomas Frey
Author of “Epiphany Z – 8 Radical Visions for Transforming Your Future”