How positive should we be about AI’s rise?

Amanda Patience
3 min read · Jan 4, 2023

The topic of “Will AI replace humans?” has been hanging over our heads for a while, but is AI really as good as we think it is?

The field of artificial intelligence has grown astronomically in recent years. AI's influence on our daily lives has become harder and harder to ignore, from the face recognition that unlocks our phones to the recommendation algorithms behind our streaming apps. Beyond that, AI has made many previously unthinkable things possible, and its capabilities are still growing. For instance, Toby Walsh, a renowned AI researcher, believes that by the year 2062 AI will be intelligent enough to compete with humans.

Walsh, however, has spoken out against immoral uses of AI and believes that because autonomous systems are developed without grounding in human values, we will continue to face challenging scenarios.

In the coming years, AI has the potential to drastically alter civilization, but at what cost?

Organizations must take into account AI’s broader effects and avoid having narrow future perspectives. Since it might be challenging to imagine what that actually involves, many organizations are currently debating the ethical application of AI.

AI bias and how it affects organizations

One might believe that technology can resolve any issues brought on by human bias. Our prejudices, however, creep into the creation and application of technology. Data sets that produce unfair or inaccurate outcomes can perpetuate inequality, bias, and discrimination, both intentionally and unintentionally.

Furthermore, unlike conventional computing techniques, machine learning algorithms are not founded on exact, deterministic rules. Ambiguity is possible in both the inputs and the outputs, and teams developing AI must deal with data bias that inherently influences the system's quality. A noteworthy instance came to light in 2018, when Reuters revealed that Amazon had attempted to bring AI into its hiring process. The company wanted to automate the screening of large volumes of resumes. Surprisingly, though, the new hiring tool was biased against women: Amazon had trained the AI on the resumes of its current employees, who were primarily men.

As a result, the algorithm favored male candidates over female candidates. Amazon discontinued the AI-based hiring tool because it could not find a way to make the system gender-neutral.
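The mechanism behind this kind of failure is simple to demonstrate. Below is a minimal toy sketch (all resumes, tokens, and weights are invented for illustration): a naive screening "model" scores new resumes by how closely their wording matches past hires, so when the training pool is mostly men, vocabulary common on male-coded resumes earns higher scores.

```python
from collections import Counter

# Hypothetical training data: past hires are mostly men, so tokens that
# appear on male-coded resumes dominate the learned weights.
past_hires = [
    ["chess_club", "executed", "captain"],      # male-coded resume
    ["executed", "captain", "golf"],            # male-coded resume
    ["executed", "chess_club", "golf"],         # male-coded resume
    ["womens_soccer", "organized", "captain"],  # female-coded resume
]

# "Training": weight each token by how often it appears among past hires.
weights = Counter(tok for resume in past_hires for tok in resume)

def score(resume):
    """Score a new resume by summing the learned token weights."""
    return sum(weights[tok] for tok in resume)

male_style = ["executed", "chess_club", "captain"]
female_style = ["womens_soccer", "organized", "captain"]

print(score(male_style))    # 8: matches the majority of past hires
print(score(female_style))  # 5: penalized for tokens rare in the training data
```

Nothing in the code mentions gender, yet the gendered skew of the training data flows straight through to the scores, which is exactly the failure mode the Amazon tool exhibited.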

What can be done, then?

When creating human-like technology with AI, it is important to account for human creativity, empathy, and judgment. Machines should be governed by the same moral standards and ideals as people, and organizations using AI must be cognizant of subtle effects that may undermine the morality of their business practices. Most importantly, ethical AI should not be constrained only by what legislation allows: an AI program that steers individuals toward undesired conduct is unethical even if it is legal.

Fundamental principles, including individual rights, privacy, non-discrimination, and non-manipulation, must be incorporated into the design of ethical AI. Companies developing AI must avoid ethical hazards and have frank dialogues about the issues that arise from gathering enormous amounts of data, especially when that data is used to train machine learning algorithms. As with other risk management techniques, an operationalized approach will help reduce AI ethical risk in an enterprise, and AI biases can be lessened. After all, the quality of AI depends on the data it is fed.
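One way to operationalize this is to audit a model's outcomes across groups. The sketch below uses made-up decision data and the "four-fifths rule", one common heuristic from employment-selection practice (the numbers and the 0.8 threshold are illustrative, not a complete fairness methodology):

```python
# Hypothetical audit: compare selection rates across two groups.
def selection_rate(decisions):
    """Fraction of candidates who received a positive decision."""
    return sum(decisions) / len(decisions)

# 1 = advanced to interview, 0 = rejected (made-up numbers)
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

# Disparate impact ratio: the disadvantaged group's rate over the
# advantaged group's rate; values below 0.8 are a common red flag.
ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential adverse impact: review the model and its training data")
```

A check like this catches skewed outcomes after the fact; pairing it with scrutiny of the training data addresses the problem at its source.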

You can follow me on Twitter too!


Amanda Patience

Creative Writer, Techpreneur and Team Lead at Zuk Technologies.