Takeaway: Do not assume that, because AI did not offer
sufficient robustness and accuracy for an application two or
three years ago, the same is true today. It is well worth your
while to stay familiar with the current AI performance levels
in various areas. You may find that a previously impractical
concept is now perfectly feasible.
AI as a Service (AIaaS)
Historically, developing AI solutions has been a carefully
crafted custom process, requiring significant investment in
both highly skilled data scientists and specialized computing
environments to make the most of their talents. In
many instances, this is still the best path, but not in all cases.
Over the last few years, predeveloped models for various
business-friendly purposes, such as image recognition,
speech recognition, language translation and text
transcription, have been brought to market by the major
cloud service providers (CSPs). While Google was perhaps
the first to offer a suite of these pre-engineered services,
AWS and Microsoft Azure now also provide similar
offerings. Some examples across vendors are the Google
Translate API, AWS Rekognition and Azure Bot Service. These are
offered as APIs that can be easily called from within any
modern business application. To be clear, they do not
provide a complete application to any organization wishing to
build an AI solution. But if the AI needs of a solution fall into
the well-defined capabilities of these standardized CSP
offerings, highly effective AI-driven applications can be
developed quickly and efficiently without having to create a
full AI capability within your enterprise.
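As an illustration of how little code such a call requires, here is a minimal sketch built around AWS Rekognition's DetectLabels operation. The stub-friendly structure (passing the client in, rather than constructing it inside the function) is a design choice for this sketch, not a vendor requirement; in production the client would come from boto3.

```python
def detect_labels(rekognition_client, image_bytes, min_confidence=80.0):
    """Ask a hosted image-recognition service (AWS Rekognition's
    DetectLabels operation, as an example) what it sees in an image.

    The client is injected so that in production it can be
    boto3.client("rekognition"), while tests can pass a stub.
    """
    response = rekognition_client.detect_labels(
        Image={"Bytes": image_bytes},
        MinConfidence=min_confidence,
    )
    # Keep just the label names; the full response also carries
    # confidence scores and bounding boxes.
    return [label["Name"] for label in response["Labels"]]
```

An application developer writes a handful of lines like these rather than training, hosting and maintaining an image-recognition model.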
Takeaway: Be sure to examine your organization's plans
for AI projects, and check whether some could gain
significant speed and efficiency in their development through
AIaaS offerings.
Ethical AI: Bias and Explainability
Perhaps the biggest change related to the business use of
AI over the last few years is the growth of awareness and
concerns about the ethics of AI usage. AI-driven decisions
affecting the consumers of a business’s products have the
power to materially impact those consumers’ lives: What
rates will they pay for insurance coverage? Who gets a
mortgage, and who does not? Who gets hired, and who
gets passed over?
72 | THE DOPPLER | SUMMER 2019
Bias
The first ethics concern has to do with the bias that is
implicit in the data used to train and develop AI models. For
example, when an AI model is trained on visual data that
under-represents women or people of color, the resultant
model will be less accurate in recognizing members of those
under-represented groups. When trained on data capturing
previous hiring decisions, past biases can be learned and
built into a model’s decision-making. It is important to note
that these concerns are not just theoretical. Specific
examples of such problems have been documented in AI tools,
and in projects from companies such as IBM, Microsoft and
Amazon, with serious potential consequences.
The good news is that researchers are now making
significant progress in addressing these problems, through a
combination of after-the-fact auditing of results to identify
residual bias in models, as well as specific data-handling and
modeling techniques designed to minimize bias.
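The after-the-fact auditing idea can be sketched very simply: tag each of a model's predictions with the demographic group it concerns, then compare accuracy group by group. The tuple format below is an assumption for illustration only; real audits use richer fairness metrics, but the per-group comparison is the core of all of them.

```python
def accuracy_by_group(records):
    """Compute prediction accuracy separately for each group.

    records: iterable of (group, predicted, actual) tuples.
    Returns {group: accuracy}, so an under-served group's
    lower accuracy stands out immediately.
    """
    correct, total = {}, {}
    for group, predicted, actual in records:
        total[group] = total.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / total[g] for g in total}
```

A large gap between groups in the resulting dictionary is exactly the kind of signal an audit is designed to surface before a model reaches production.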
Explainability
Historically, various data science algorithms used to make
decisions were often directly transparent. This means that
for any individual decision made using the algorithm, the
reasons for that decision (approve/do not approve, put in
this category vs. that category, etc.) could be directly
confirmed by tracing the decision path. With some new classes
of AI, such as deep learning, this is no longer the case. A
given model can be highly accurate, yet still opaque as to
confirming why any particular decision was made. These
models are, in effect, black box decision-making machines.
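One family of techniques for peering inside such a black box is perturbation-based explanation, the idea underlying tools such as LIME: change one input feature at a time and watch how the model's output moves. The sketch below is a minimal, assumed formulation that treats the model as an opaque scoring function; real tools sample many perturbations rather than a single baseline swap.

```python
def feature_importance(predict, example, baseline):
    """Explain one black-box decision by perturbation.

    predict:  opaque scoring function (the "black box").
    example:  the input whose decision we want explained.
    baseline: neutral replacement value for each feature.

    Replaces one feature at a time with its baseline value;
    the bigger the score change, the more that feature drove
    this particular decision.
    """
    base_score = predict(example)
    importances = []
    for i in range(len(example)):
        perturbed = list(example)
        perturbed[i] = baseline[i]  # knock out feature i
        importances.append(base_score - predict(perturbed))
    return importances
```

Note that the model itself is never opened up; the explanation is inferred entirely from its input-output behavior, which is what makes the approach applicable to deep learning models.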
For many uses of deep learning, this lack of explainability is
not really important, but for other use cases, it may be
significant. Data protection laws in various jurisdictions are
starting to require the ability to document the provenance
of these decisions, with Europe’s GDPR a prime example.
Aside from regulation, as the use of deep learning models
expands, it is clear that more and more decisions made for
consumers might be subject to scrutiny for liability
reasons.
Because of this, the industry has developed the concept of
explainable AI (XAI). This term is used to refer to the
various approaches being developed to address the lack of
explainability in AI-driven decisions. No currently known