A simple definition of artificial intelligence (AI) is machines/computers that can think, ie smart machines. The concept was born in the 1950s; see Anyoha (2017) for a succinct history of AI. AI even shows up in the 1950s-themed comedy show, ‘The Marvelous Mrs. Maisel’. We are now in the third or fourth wave of AI entering the business world. SHABNAM MOKHTAR writes.
There are two types of AI:
- Narrow AI: the machine solves a specific problem, eg instead of a general x-ray, computer vision technology is used to detect a particular lung tumor.
- General AI: the machine is expected to perform any task that a human can do. For example, a robot that can make a cup of coffee in different households needs to find the coffee machine, water, milk, sugar, etc, in different locations in different homes.
We are still very much focused on the development of narrow AI. General AI still has a long way to go. Increasingly, AI experiments nowadays are facilitated by three factors: connectivity, data availability and computing power.
Since AI refers to smart machines, it is commonly confused with machine learning (ML). ML is a subset of AI; although an important one, it is not the only component of AI. Figure 1 summarizes the three components of AI: input, process and output.
Figure 1: Core components of AI
Let’s discuss input and output of AI first, then we will discuss the processing engine. Input for AI (ie data) could come from the following sources:
- Sensors: eg a web crawler detects what is written on a website, or sensors on physical assets collect and feed data for processing.
- Human-machine interface: this is basically a human entering data into an AI system using different tools (mouse, touchpad, tablet, microphone, etc).
These data will be processed by the engine (next section), and the output is basically the results. This could take a physical form like a self-driving car, or our coffee-making robot. Or, it could be in the form of results in your app or platform.
Processing = Engine = ML
The data collected from the input will be processed by the computation engine. Examples of the engines include an expert system (ES) and ML. In an ES, the computer tries to emulate a human expert. The system will have a large knowledge base, and the expert will specify the rules (commonly based on ‘if-then’ rules). So when users ask the ES a question, it will be able to answer like an expert. The simplest version of the ES is also known as a rule-based system, which is basically the simplest version of AI. The main benefit of an ES is automation, speed and reduced error.
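The if-then logic of an expert system can be sketched in a few lines of code. This is a minimal illustration with invented rules, not a real knowledge base: a question is answered by checking expert-specified conditions in order and returning the first conclusion that matches.

```python
# Minimal sketch of a rule-based expert system.
# The rules below are hypothetical, invented for illustration only.

def diagnose(symptoms):
    """Answer like an 'expert' by matching if-then rules in order."""
    rules = [
        # (condition, conclusion) pairs written by the human expert
        (lambda s: "fever" in s and "cough" in s, "Possible flu: consult a doctor"),
        (lambda s: "fever" in s, "Possible infection: monitor temperature"),
        (lambda s: True, "No rule matched: insufficient information"),
    ]
    for condition, conclusion in rules:
        if condition(symptoms):
            return conclusion

print(diagnose({"fever", "cough"}))
```

Note that the machine contributes nothing beyond lookup and matching: all the knowledge lives in the rules the expert wrote down.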
ML is a more complex AI engine. In simple terms, ML refers to computers finding solutions/rules on their own. It is not merely about automation. Let’s take an example of determining the tax treatment for import transactions.
Table 1: Automation vs. ML

| Automation | ML: computers find rules on their own (computers ‘think’) |
| --- | --- |
| ➤ Subject matter expert (SME) explains the tax rules | ➤ Software engineer (SE) writes code to allow the machine to learn PATTERNS from data |
| ➤ SE codes the rule into the system | ➤ The machine finds patterns and determines the rule on its own |
In a rule-based system, the tax expert defines what the tax rules are and the software engineer will code the rules. The computer just executes/retrieves the rules. In other words, in automation, the computer is not ‘thinking’ too deeply. It is merely executing an order. The code may look something like in Figure 2.
Figure 2: Rule-based system computer code
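A rule-based version of the tax example might look like the sketch below. The categories, threshold and rates are invented for illustration; the point is that every rule is hard-coded by a human before the system runs.

```python
# Hypothetical rule-based tax check for import transactions.
# Categories, thresholds and rates are invented for illustration.

def import_tax_rate(category, declared_value):
    if category == "books":
        return 0.0            # exempt category
    elif category == "electronics" and declared_value > 1000:
        return 0.10           # higher duty above a value threshold
    else:
        return 0.05           # default rate for everything else

print(import_tax_rate("electronics", 1500))  # → 0.1
```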
In an ML system, the code is structured to allow the computer to LEARN patterns from data and DETERMINE a rule. So the computer thinks deeper. The code may look something like in Table 2.
Table 2: ML tax coding
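To make the contrast concrete, here is a toy sketch of the ML approach, with made-up transaction data. Instead of a human coding the taxable-value cutoff, the program searches past labeled transactions for the threshold that best separates taxable from exempt cases, ie the machine determines the rule from patterns in the data.

```python
# Toy ML sketch: the machine learns a value threshold from labeled
# past transactions instead of having the rule hand-coded.
# All data below is invented for illustration.

def learn_threshold(examples):
    """examples: list of (declared_value, is_taxable) pairs.
    Try each observed value as a cutoff and keep the one that
    classifies the most past transactions correctly."""
    best_cut, best_correct = None, -1
    for cut in sorted(v for v, _ in examples):
        correct = sum((v >= cut) == taxable for v, taxable in examples)
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

history = [(200, False), (400, False), (900, False), (1100, True), (1500, True)]
print(learn_threshold(history))  # the machine infers the cutoff: 1100
```

Real ML systems use far richer models than a single threshold, but the division of labor is the same: the engineer writes the learning procedure, and the machine finds the rule.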
Even the role of the software engineer/data scientist changes in an ML system, as the code is written to allow the computer to find patterns rather than merely execute rules.
So how can the computer think deeply? One way is by finding patterns using mathematical formulas (eg K-means clustering). ML is basically the workhorse, the engine that does the mathematical calculations, linear algebra for example. Linear algebra involves a lot of parallel computation (recall the calculation of matrices and vectors): simple but repetitive calculations. For this reason, the more complex processing engines use chips originally built for gaming, known as GPUs (graphics processing units), which are not the fastest chips for sequential work but allow many calculations to run in parallel. This is also why NVIDIA (one of the major producers of GPUs) transformed its business from focusing on the gaming market into more diverse AI applications, including computer vision processing and self-driving cars.
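The K-means idea mentioned above can be shown in a few lines. This is a minimal one-dimensional sketch with made-up numbers: each point is repeatedly assigned to its nearest cluster centre, and each centre is recomputed as the average of its points; simple, repetitive arithmetic of exactly the kind GPUs parallelize well.

```python
# Minimal K-means sketch (k clusters, one dimension).
# The data points are invented for illustration.

def kmeans(points, k=2, iters=10):
    centers = points[:k]                  # naive initialisation: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                  # assign each point to its nearest centre
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # move each centre to the mean of its points (keep it if cluster is empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

print(kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 10.0]))  # two clear groups emerge
```

No human told the program where the groups are; the pattern emerges from repeated calculation, which is the essence of the ML engine.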
In short, ML is one of the engines of AI. Deep learning, which involves neural networks, is a subset of ML, ie a more complex form of machine learning. Deep learning is a more powerful engine: it involves a lot of linear algebra calculations and is very data-hungry.
Although ML and deep learning are able to perform more complex ‘thinking’, each requires certain types of data and a larger amount of data compared to an ES. So the best approach is not to choose the fanciest technology, but evaluate the task you are trying to solve and the amount and type of data you have.
If your project involves structured data with small sets of similar documents, a rule-based AI is more suitable. If you are dealing with large sets of documents with large variances, ML-based AI provides the most efficient coverage and analysis. Nonetheless, keep in mind that one of the challenges with a more complex ML algorithm is the black box syndrome: you cannot tell how the machine made its decision, and the results are not easy to interpret.
In conclusion, technology is not a magic pill that solves everything. Used wisely, it enhances efficiency and accuracy. If you plaster technology onto a badly designed process, you will end up with a dreadful digital outcome. Do not select a technology and then build a process around it. Focus on solving the problem at hand, and choose the right technology for it. Tech should serve the process, not the other way around.
I’d like to close with a story shared by an acquaintance of mine in Los Angeles. He said: “I’ve spent two weeks training this bot, and I am not even close to getting it where it needs to be.” Keep in mind, even with good tech and process design, it takes time to train the machines. Mrs Maisel fans, just think of Prof Weissman trying to train his computer to sing the potty song at Bell Labs in 1959 (Season 2, Episode 8)!
Shabnam Mokhtar is the group executive vice-president of SHAPE Knowledge Services. She can be contacted at [email protected].