The second change involves the availability and
cost of computing power.
Computing power is the foundation of AI. Right
now, it is a costly and scarce resource. While
growth in computing power has been a major
driver behind progress in AI, a lack of readily
available and affordable computing power is
becoming a constraint that holds back broad-
scale AI adoption.
We need to make computing power more abundant
and affordable, and we should take action now to
meet this demand.
The third change involves AI deployment.
Hybrid clouds have become a major cloud service
model for enterprise use. Right now, AI is deployed
mostly in the cloud, with only a small portion at the
edge. AI has not yet been closely integrated into
business environments.
AI should be pervasive. Furthermore, it should
be adaptable to all scenarios, and in all cases,
user privacy must be respected and protected.
The fourth change involves the efficiency and
security of algorithms.
Algorithms are another driver behind AI
development. The majority of the basic
algorithms we use today were invented before
the 1980s. As AI comes into wider use, the
weaknesses of existing algorithms are becoming
more apparent.
Algorithms of the future should be data-
efficient. That means they can deliver the same
results with less data. Future algorithms should
also be energy-efficient, producing the same
results with fewer computations and less energy.
Algorithms must be secure and explainable.
Algorithms like these will set the stage for wide-
scale AI development.
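To make the idea of compute efficiency concrete, one familiar technique is early stopping: halting training once a held-out validation set stops improving, so the model reaches the same result with fewer computations. The Python sketch below is a minimal illustration on synthetic data; the dataset, model, and hyperparameters are hypothetical stand-ins, not a prescription from this report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data -- a hypothetical stand-in
# for a real dataset, used only to make the sketch runnable.
X = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = (X @ w_true + rng.normal(scale=0.5, size=1000) > 0).astype(float)
X_train, X_val = X[:800], X[800:]
y_train, y_val = y[:800], y[800:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(w, X, y):
    p = sigmoid(X @ w)
    eps = 1e-12  # avoid log(0)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

w = np.zeros(20)
lr, patience = 0.1, 10  # illustrative hyperparameters
best_loss, stall = np.inf, 0

for epoch in range(1000):
    # One gradient-descent step for logistic regression.
    grad = X_train.T @ (sigmoid(X_train @ w) - y_train) / len(y_train)
    w -= lr * grad

    # Early stopping: once validation loss stops improving for
    # `patience` epochs, further computation buys no better result.
    val_loss = log_loss(w, X_val, y_val)
    if val_loss < best_loss - 1e-5:
        best_loss, stall = val_loss, 0
    else:
        stall += 1
        if stall >= patience:
            break

print(f"stopped after {epoch + 1} epochs, validation loss {best_loss:.4f}")
```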
The fifth change involves AI automation.
At present, AI projects are labour-intensive,
especially during the data labelling process. This
requires so much labour, in fact, that specialised
“data labeller” jobs have begun to emerge. There
is even a running joke in the industry: “No labour,
no intelligence.”
Moving forward, we must make AI itself far
more automated, enabling automated or semi-
automated operations in processes like data
labelling, data collection, feature extraction,
model design, and training.
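One common way to reduce labelling labour is model-assisted pre-labelling, in which an existing model labels the examples it is confident about and routes only uncertain ones to human labellers. The following sketch illustrates the pattern; the classifier, threshold, and data are illustrative assumptions, not a specific tool's API.

```python
from dataclasses import dataclass

@dataclass
class Item:
    text: str

# Hypothetical confidence threshold; in practice it would be tuned
# against the cost of label errors versus human review.
CONFIDENCE_THRESHOLD = 0.9

def model_predict(item):
    """Stand-in for a trained classifier returning (label, confidence)."""
    # A real system would run an actual model here.
    return ("positive", 0.95) if "good" in item.text else ("negative", 0.6)

def semi_automated_labelling(items):
    auto_labelled, needs_human = [], []
    for item in items:
        label, confidence = model_predict(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            # Confident predictions become labels automatically.
            auto_labelled.append((item, label))
        else:
            # Uncertain items are routed to human labellers.
            needs_human.append(item)
    return auto_labelled, needs_human

items = [Item("good product"), Item("unclear review")]
auto, manual = semi_automated_labelling(items)
print(f"{len(auto)} auto-labelled, {len(manual)} sent to humans")
```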
The sixth change involves the practical
application of AI.
In June 2018, Benjamin Recht, an associate
professor at UC Berkeley, and his co-authors
released a paper with a deliberately paradoxical
title: "Do CIFAR-10 Classifiers Generalise to
CIFAR-10?" According to the paper, models that
achieve high accuracy on the original CIFAR-10
test set are 5% to 15% less accurate on a new
test set, developed by the authors, that closely
resembles CIFAR-10. This points to a significant
gap between a model's benchmark accuracy and its
accuracy in practical applications.
It is clear that many high-performing models
and algorithms perform better in tests than in
real-world execution.
Industrial-grade AI models of the future must
be able to meet the needs of real-world execution.
It is not enough to perform well in test sets alone.
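The evaluation behind Recht's finding can be expressed simply: score the same model on the original test set and on a freshly collected, closely matching one, and report the gap. Below is a minimal sketch of that comparison; the model and both datasets are synthetic placeholders for a trained classifier and real test sets.

```python
import numpy as np

def accuracy(model_predict, X, y):
    """Fraction of correct predictions."""
    return float(np.mean(model_predict(X) == y))

def generalisation_gap(model_predict, original_test, new_test):
    """Accuracy drop from the original test set to a freshly
    collected test set drawn from a closely matching distribution."""
    acc_original = accuracy(model_predict, *original_test)
    acc_new = accuracy(model_predict, *new_test)
    return acc_original - acc_new

# Illustrative model and data; in practice these would be a trained
# classifier and two real test sets (e.g. CIFAR-10 and a re-collected one).
rng = np.random.default_rng(1)
X_orig, y_orig = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)
X_new, y_new = rng.normal(loc=0.3, size=(100, 5)), rng.integers(0, 2, 100)

def model(X):
    """Illustrative classifier: sign of the feature sum."""
    return (X.sum(axis=1) > 0).astype(int)

gap = generalisation_gap(model, (X_orig, y_orig), (X_new, y_new))
print(f"accuracy gap between test sets: {gap:.3f}")
```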
The seventh change involves model updates.
The accuracy of any given model is not static;
it changes with data distributions, application
environments, and hardware environments. Keeping
accuracy within an acceptable range is necessary
for enterprise applications. Today's model
updates, however, are not done in real time:
they rely on human input at fixed intervals,
forming a semi-open-loop system.
We believe that the models of the future
need to be adaptive to changes and updated in
real time. This represents a real-time, closed-
loop system that helps enterprise AI applications
continue to operate in an optimal state.
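As a rough illustration of such a closed loop, the sketch below monitors a deployed model's accuracy on each incoming batch of labelled data and triggers retraining automatically when accuracy falls below an acceptable floor. The threshold, model, and retraining step are hypothetical placeholders for what would, in practice, be a full training pipeline.

```python
import random

ACCURACY_FLOOR = 0.9  # illustrative acceptable-accuracy threshold

def evaluate(model, batch):
    """Accuracy of the current model on recently observed, labelled data."""
    correct = sum(model(x) == y for x, y in batch)
    return correct / len(batch)

def retrain(model, batch):
    """Stand-in for an automated retraining job on fresh data."""
    print("retraining on", len(batch), "recent examples")
    return model  # a real system would return an updated model

def closed_loop_update(model, stream_of_batches):
    # Closed loop: each incoming batch both monitors the deployed model
    # and, when accuracy drifts below the floor, triggers retraining --
    # without waiting for a human-scheduled update.
    for batch in stream_of_batches:
        if evaluate(model, batch) < ACCURACY_FLOOR:
            model = retrain(model, batch)
    return model

# Toy usage: a drifting data stream quickly drops below the floor.
random.seed(0)
toy_model = lambda x: x > 0.5
batches = [[(random.random(), random.random() > 0.4) for _ in range(50)]
           for _ in range(3)]
closed_loop_update(toy_model, batches)
```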
The eighth change involves synergy between AI
and other technologies.
Every general-purpose technology (GPT) delivers
its maximum economic value only when combined
with other technologies, and AI is no exception.
Yet current discussions of AI more often than not
focus on AI alone, with no mention of other
technologies.
In the future, we need to promote greater
synergy between AI and other technologies,
including cloud, Internet of Things (IoT), edge
computing, blockchain, big data, and databases.
This is the only way to fully unleash the value of AI.