Artificial Intelligence (AI) is an overarching term for information systems capable of perceiving their environment, learning, reasoning, and subsequently executing actions aligned with their objectives.
The defining characteristic of AI is its capacity for learning, which sets it apart from sophisticated algorithms employed in scoring, speech recognition, and image generation, as well as from virtual assistants that provide predetermined guidance in various life situations.
Based on the training methodologies employed, AI can be divided into several subfields. Sometimes the term “artificial intelligence” and the names of these subfields are used interchangeably, but in fact each subfield is a specific instance of AI. Let’s look at what lies behind each type of training.
Machine Learning (ML) revolves around devising algorithms and models that identify hidden patterns in data. The learning process occurs through generalization over many practical examples, and an ML system’s efficacy increases as it accumulates knowledge about the subjects under scrutiny. Applications of ML include image recognition, content recommendation, suspicious activity detection, and more.
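As a minimal sketch of this idea, the snippet below shows a model generalizing a pattern from a handful of labeled examples to inputs it has never seen; the toy transaction data and the choice of scikit-learn’s LogisticRegression are assumptions made purely for illustration.

```python
# A minimal ML sketch: learn a hidden pattern from labeled examples,
# then generalize to new data. The dataset is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Practical examples: [amount, hour of day], labeled 0 = legitimate, 1 = suspicious.
X_train = [[20, 14], [35, 10], [5000, 3], [7500, 2], [15, 16], [6200, 4]]
y_train = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # generalize the pattern from the examples

# The trained model handles transactions it has never encountered.
print(model.predict([[25, 13], [6800, 3]]))  # likely output: [0 1]
```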
Deep Learning (DL) is a subset of ML that utilizes multi-layered neural networks, enabling the model to autonomously work out how to solve a problem. High volumes of data are required to achieve efficiency and accuracy during the learning process.
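A sketch of what “multi-layered” means in practice is shown below; the layer sizes and the use of PyTorch are assumptions chosen only to illustrate the structure.

```python
# A multi-layered ("deep") network: data passes through several layers,
# each transforming the previous layer's output. Sizes are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128),  # input features -> first hidden layer
    nn.ReLU(),
    nn.Linear(128, 64),  # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 10),   # output layer, e.g. scores for 10 classes
)

x = torch.randn(32, 64)  # a batch of 32 examples with 64 features each
print(model(x).shape)    # torch.Size([32, 10])
```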
Neural Networks (NN) are models inspired by the human brain’s functionality. Just as neurons and synapses constitute the brain, numerous computational units interconnect and interact within a neural network. Typically, a NN has multiple layers, allowing information to be processed in parallel. Neural networks are trained on labeled datasets with explicit patterns, after which the AI can process unlabeled data.
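To make the neuron-and-connection picture concrete, here is a toy, NumPy-only illustration of a single layer; the numbers are invented and nothing here is tied to a particular framework.

```python
# One layer of a neural network: every "neuron" combines the same inputs
# through its own weighted connections (the analogue of synapses).
import numpy as np

inputs = np.array([0.5, -1.2, 3.0])        # signals arriving at the layer
weights = np.array([[0.2, -0.5, 1.0],      # each row: one neuron's connection weights
                    [0.7,  0.1, -0.3]])
biases = np.array([0.1, -0.2])

# Both neurons process the inputs in parallel, then apply an activation.
activations = np.maximum(0, weights @ inputs + biases)  # ReLU
print(activations)
```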
Natural Language Processing (NLP) is a discipline that emerged at the intersection of machine learning and mathematical linguistics. It allows people to communicate with AI systems in natural language, the language humans use among themselves. Language processing itself consists of comprehension and generation components. High-quality generation entails the capability to analyze and synthesize unstructured data from various sources.
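A small sketch of the comprehension side is given below: raw, unstructured sentences are turned into features and classified. The example sentences and the scikit-learn pipeline are assumptions chosen purely for illustration.

```python
# NLP "comprehension" in miniature: learn to label unstructured text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "the delivery was fast and great",
    "terrible service, very slow delivery",
    "great support, very happy",
    "slow response and terrible support",
]
labels = ["positive", "negative", "positive", "negative"]

nlp_model = make_pipeline(CountVectorizer(), MultinomialNB())
nlp_model.fit(texts, labels)  # learn word patterns from the examples

print(nlp_model.predict(["fast and great support"]))  # likely: ['positive']
```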
Computer Vision (CV) is a branch of ML that endows information systems with the ability to “see” and extract data from visual images. Models automate tasks that rely on human vision, such as examining photographs, identifying their content, and interpreting the acquired information. Images and videos serve as the training material. The primary objective is to collect, process, analyze, and comprehend digital images while extracting the data they contain.
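As a toy illustration of extracting information from pixels, the snippet below slides a hand-written edge-detecting filter over a synthetic image; both the image and the kernel are invented here, and real CV models learn such filters from data rather than hard-coding them.

```python
# Extracting structure from pixels: a vertical-edge filter applied by hand.
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0                  # left half dark, right half bright

kernel = np.array([[-1, 0, 1],      # responds to left-to-right brightness changes
                   [-1, 0, 1],
                   [-1, 0, 1]])

h, w = image.shape
edges = np.zeros((h - 2, w - 2))
for i in range(h - 2):              # slide the 3x3 window over the image
    for j in range(w - 2):
        edges[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)

print(edges)                        # large values mark the edge between the halves
```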
All of the AI described above refers to models created to solve a specific problem or provide a specific service. For example, ChatGPT is trained to conduct chat communication but cannot learn other tasks. Such AI is also called weak or narrow AI (Narrow AI).
“Artificial General Intelligence” (AGI) or strong AI, on the other hand, denotes software capable of learning any subject matter and solving any problem.
In theory, AGI can teach itself, set new goals for itself, determine ways to achieve them, and evaluate the quality of the result. Strong AI must also be able to act under conditions of uncertainty.
AGI does not exist yet, and there are discussions in the IT industry about how to create it and whether it can be done at all. Constraints on developing strong AI include fundamental issues of methodology and architecture, as well as restricted access to high-performance computing resources. Moreover, the prototype for AI is the human mind, yet human thinking is largely non-algorithmic, which creates additional challenges.