Whose Values Should be Ingrained in Algorithms?

Originally published in Journal of AI, Robotics & Workplace Automation

All tools can be used for good or ill. AI is such a powerful tool that attention must be paid to the ways it can be intentionally misused. Bad actors exist, and they will not be able to resist the power AI offers. Governments will need to collaborate to protect us from malevolent people and organizations.

However, the more insidious aspect of AI is the unintentional harm it can cause.

The danger lies in modelers' tightly focused expertise and the bias inherent in data. A narrow worldview, a scarcity of systems thinking, and a shortage of historical reference can easily combine to create exclusion, inequity, and injustice. We must hold our data scientists to a high standard of care and take a lateral look at the human values captured in data and reinforced by algorithms.

We must carry on a deep conversation about whose values should be ingrained in algorithms. Our technology has advanced to the point where we must question humanity's perception of itself and make lasting moral decisions.

This will be no small task, and it is one that technology is ill-prepared to assist with. While most humans can agree that nuclear weapons are immoral, we still preserve our arsenals. While most humans can agree that global warming is a tragedy that must be avoided, we still burn fossil fuels.

Our ultimate challenge is coming to a moral consensus across cultural divides.


John Thompson

Serial innovator, keynote speaker, and author of four books on AI and data. Experienced in leading GenAI and foundational-AI product and go-to-market teams that create and deliver products and solutions driving results at scale.
