Martin, Carlos. (2022). Why AI Needs a Strong Moral Compass for a Positive Future. Spiceworks.

In 2023, we’ll start to see a methodology of feeding data, testing, and monitoring outcomes that ensures a moral compass for our algorithms, just as Asimov envisioned the three basic laws of robotics 80 years ago. Carlos Martin, co-founder and CEO of macami.ai, emphasizes the need for morally sound AI as our collaboration with the technology deepens.

In 1942, author Isaac Asimov introduced the three laws of robotics with his short story Runaround:

  • First law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • Second law: A robot must obey the orders given to it by human beings except where such orders would conflict with the first law.
  • Third law: A robot must protect its own existence as long as such protection does not conflict with the first or second law.

In this story and subsequent ones such as I, Robot, Bicentennial Man, and the Robot series, Asimov pictured a world far more technologically advanced than ours, where intelligent robots that helped with household tasks were a commodity: companions to the elderly, nannies for children, and workers. These robots had what was called a positronic brain, a type of CPU that managed and processed their vision, intelligence, and motor functions.

Available to businesses and the general public today is what is called narrow artificial intelligence. This type of AI is accessed through separate channels or APIs depending on the task at hand. For example, for natural language processing, one must connect to one or more specific channels/APIs that provide that type of AI. For computer vision, one must also connect to one or more specific channels/APIs. The same is the case with machine learning algorithms. We do not yet have an AI that encompasses all of these, except experimentally. There are, of course, experimental AI technologies that try to take it to the next level, known as artificial general intelligence, or even further ones, such as the AI called PLATO, inspired by research on how babies learn (PLATO standing for Physics Learning through Auto-encoding and Tracking Objects).
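To make that concrete, here is a minimal sketch of how narrow AI is typically consumed today: one channel per capability, stitched together by the application. The endpoints, payloads, and API key below are hypothetical placeholders, not any vendor's actual interface.

```python
import requests

API_KEY = "YOUR_API_KEY"  # hypothetical credential, for illustration only

def analyze_sentiment(text: str) -> dict:
    # Hypothetical NLP endpoint: one narrow-AI capability behind its own API.
    resp = requests.post(
        "https://api.example.com/v1/nlp/sentiment",
        json={"text": text},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

def detect_objects(image_bytes: bytes) -> dict:
    # Hypothetical computer-vision endpoint: a separate channel for a separate task.
    resp = requests.post(
        "https://api.example.com/v1/vision/objects",
        files={"image": image_bytes},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

There is no single brain here, positronic or otherwise; the "intelligence" is a set of disconnected services that the developer must wire together.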


The Deep Impact of AI on Our Lives

The technology used by businesses and the general public is what is considered narrow AI. Even though AI is still in its infancy, we are seeing a deep impact on our lives. Social media is giving us more content that reaffirms our beliefs, even if those beliefs are wrong. If you are someone who believes the moon landing was faked, social media’s AI will find more content to keep you interested. It is cold. It does not care if there are a million other facts that prove otherwise; it will still feed you what you want. Why? It’s simple: money. Those algorithms can feed you more ads by keeping your eyeballs busy with that content. Plain and simple.
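A toy sketch of that dynamic (not any platform's actual ranking code) shows how an engagement-driven feed works: the objective is attention, and accuracy never enters the calculation.

```python
# Illustrative only: a feed sorted purely by predicted engagement,
# with no notion of whether the content is true.
posts = [
    {"id": 1, "topic": "moon landing was faked", "predicted_engagement": 0.92},
    {"id": 2, "topic": "fact-check: moon landing evidence", "predicted_engagement": 0.31},
    {"id": 3, "topic": "cat videos", "predicted_engagement": 0.74},
]

def rank_feed(posts):
    # The objective function is attention (and therefore ad revenue), nothing else.
    return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in rank_feed(posts):
    print(post["id"], post["topic"])
```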

AI Is Not Infallible: Who Bears the Onus of Morality?

We have cases where an AI has incorrectly identified a person of interest, and where AI vision systems have more trouble identifying people of color. There is the case of the Kentucky court system, which used an AI algorithm to assess a defendant’s risk when determining bail, only to find out later that the system disproportionately rated Black defendants as higher risk, where previously there had been little difference. We have seen AI algorithms discard people’s resumes based on their age. There is also the case of Tay, Microsoft’s AI chatbot, which Twitter taught to be a racist jerk in less than 16 hours: it began posting inflammatory and offensive tweets through its Twitter account, and Microsoft shut it down.

The difference between AI and previous coding methods is that AI is, for the most part, a statistical algorithm, whereas previous coding methods or languages are deterministic, if-then-else flows. Traditionally, we have seen the practice of coding evolve into something more rigorous: several methodologies and practices have emerged, such as waterfall, Agile, and Scrum. Practices and regulations have also evolved to protect information, such as PCI DSS (to protect a cardholder’s information) or HIPAA (to protect a patient’s information).
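The contrast can be illustrated with a small sketch. The rule-based function below is deterministic: the same input always produces the same, fully specified answer. The learned model, here a simple scikit-learn logistic regression trained on made-up data, produces whatever pattern the data happened to teach it, including any bias hidden in that data.

```python
from sklearn.linear_model import LogisticRegression

# Deterministic, if-then-else logic: the outcome is fully specified by the rules we wrote.
def approve_loan_rules(income: float, debt: float) -> bool:
    if income <= 0:
        return False
    return debt / income < 0.4

# Statistical logic: the outcome is whatever pattern the model extracted from the
# data we fed it -- including any unfairness baked into the historical decisions.
X = [[50, 10], [30, 20], [80, 5], [25, 18]]  # [income, debt] in thousands (made-up)
y = [1, 0, 1, 0]                             # past approvals, which may themselves be biased
model = LogisticRegression().fit(X, y)

print(approve_loan_rules(40, 12))            # always the same answer for the same input
print(model.predict_proba([[40, 12]])[0])    # a probability learned from the data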

The purpose of these methodologies and practices is precisely to bring order to the chaos of development, to force planning and design, and to bring rigorous testing methods to any development underway. The end objective is to have solid, resilient software solutions that solve needs and also protect people’s and businesses’ interests.

As mentioned, artificial intelligence algorithms are different. Pedro Domingos, a professor at the University of Washington, put it well in his book, The Master Algorithm: “Learning algorithms are the seeds, data is the soil, and the learned programs are the grown plants. The machine-learning expert is like a farmer, sowing the seeds, irrigating and fertilizing the soil, and keeping an eye on the health of the crop but otherwise staying out of the way.” There is, as of today, no commonly accepted methodology for feeding data to current machine learning algorithms. There are also no guardrails that help these algorithms determine right from wrong.
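As a sketch of what one such guardrail could look like, here is a hypothetical pre-training check that reports how each demographic group is represented in the data before it is fed to the algorithm. The records and attribute names are illustrative only, not an established methodology.

```python
from collections import Counter

def representation_report(records, attribute):
    """Report what share of the training data each group under `attribute` makes up."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical training records, for illustration only.
training_data = [
    {"age_group": "18-34", "label": 1},
    {"age_group": "18-34", "label": 0},
    {"age_group": "35-54", "label": 1},
    {"age_group": "55+",   "label": 0},
]

for group, share in representation_report(training_data, "age_group").items():
    print(f"{group}: {share:.0%} of training data")
```

A skewed report would be a prompt to go back to the soil, so to speak, before the crop is grown.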

In October 2022, the White House released the Blueprint for an AI Bill of Rights, which, in essence, attempts to establish that AI systems must be safe, effective, free of discrimination, respectful of data privacy, transparent about their use, and subject to an extensive structure of human oversight. In my opinion, this is a great start, and yet it does not cover the most poignant and obvious issue: it focuses on the end result rather than the beginning. Allow me to explain: for an AI system to comply with all the requirements of the AI Bill of Rights, it must be fed data first. Plain and simple.

I believe that in 2023, we will start seeing the proliferation and maturing of new methodologies to feed data into AI algorithms and test their results. And in my opinion, we need to think of these algorithms in a manner similar to how we think of the need for humans to have morality and values, to value fairness and manners. It is hard to think of algorithms as needing a moral compass, but the reality is that they affect human life, and not always for the better.


In Collaboration for the Future

When we implement AI models, we need planning and design steps to take place. These are already common practices in the traditional software development process; why not in AI? We need to be asking questions such as: Does this model apply to all skin colors and genders? Has this model been fed sample data that fairly represents the people it will affect? Do the outcomes of this model protect the rights of all citizens equally? Does this algorithm have a way to identify something it should not say?
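As a rough illustration of the outcome-focused questions, the sketch below (with made-up predictions and group labels) compares a model's favorable-outcome rate across demographic groups. A large gap is an early warning that deserves investigation, not a verdict on its own.

```python
def outcome_rates_by_group(predictions, groups):
    """Compare the model's positive-outcome rate across demographic groups --
    a first, crude check that its decisions treat groups comparably."""
    rates = {}
    for group in set(groups):
        selected = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(selected) / len(selected)
    return rates

# Hypothetical model outputs, for illustration only.
predictions = [1, 0, 1, 1, 0, 0, 1, 0]          # 1 = favorable decision (e.g., bail granted)
groups      = ["A", "A", "A", "B", "B", "B", "A", "B"]

for group, rate in outcome_rates_by_group(predictions, groups).items():
    print(f"Group {group}: favorable outcome rate {rate:.0%}")
```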

In short, AI is a more evolved technology than traditional software. Why not have a methodology of feeding data, testing, and monitoring outcomes that ensures a moral compass for our algorithms, just like Asimov envisioned the three basic laws of robotics 80 years ago?