As artificial intelligence becomes more complex, its evolving technologies and the jargon associated with them may sound unfamiliar or strange to you. In this article, we have compiled the most important AI-related terms for you to flaunt in your next meeting.
Analogical reasoning: In everyday usage, "analog" refers to non-digital data, but in AI, analogical reasoning is the process of drawing conclusions about a new situation from its similarity to past cases and results, much like predicting stock-market movements from historical patterns.
Artificial neural networks: Also called connectionist systems, an artificial neural network is not a single algorithm but rather a framework within which many different machine learning algorithms work together to process complex data inputs.
Autonomic computing: Refers to the self-managing characteristics of distributed computing resources, which adapt to unpredictable changes while hiding intrinsic complexity from operators and users.
Backpropagation: Is a method used in artificial neural networks to compute the gradient needed to update the network's weights. It is commonly used to train deep neural networks, a term referring to neural networks with more than one hidden layer.
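As an illustration of the definition above, here is a minimal sketch (the one-hidden-unit network and single training pair are invented for illustration) of computing gradients with the chain rule and using them to update the weights:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy network: 1 input -> 1 hidden unit -> 1 output, one training pair.
x, y = 1.0, 1.0
w1 = random.uniform(-1, 1)  # input-to-hidden weight
w2 = random.uniform(-1, 1)  # hidden-to-output weight
lr = 0.5

def forward(w1, w2):
    h = sigmoid(w1 * x)                   # hidden activation
    out = sigmoid(w2 * h)                 # output activation
    return h, out, 0.5 * (out - y) ** 2  # squared-error loss

initial_loss = forward(w1, w2)[2]

for _ in range(200):
    h, out, _ = forward(w1, w2)
    # Backward pass: chain rule, layer by layer, from the loss to the weights.
    d_out = (out - y) * out * (1 - out)  # error at the output's pre-activation
    d_w2 = d_out * h                     # gradient for w2
    d_h = d_out * w2 * h * (1 - h)       # error propagated to the hidden layer
    d_w1 = d_h * x                       # gradient for w1
    w2 -= lr * d_w2
    w1 -= lr * d_w1

final_loss = forward(w1, w2)[2]
```

After training, the loss has shrunk, showing that the gradients pushed the weights in the right direction.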
Backward chaining: Or backward reasoning, is an inference method that works backward from a goal, chaining from the desired conclusion to the facts that support it. It is used in automated theorem provers, inference engines, proof assistants, and other artificial intelligence applications.
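A minimal sketch of this goal-driven style of inference, assuming a toy rule base (the rules and facts are invented for illustration):

```python
# Each rule maps a goal to lists of subgoals that would prove it.
rules = {
    "mortal": [["human"]],
    "human": [["greek"]],
}
facts = {"greek"}

def backward_chain(goal):
    """Try to prove `goal` by working backward from it to known facts."""
    if goal in facts:
        return True
    for subgoals in rules.get(goal, []):
        # The goal holds if every subgoal of some rule can be proved.
        if all(backward_chain(g) for g in subgoals):
            return True
    return False
```

Calling `backward_chain("mortal")` succeeds by chaining mortal → human → greek; an unprovable goal simply fails.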
Bayesian programming: A programming approach built around Bayes' theorem, which relates the probability of a hypothesis given observed evidence to the prior probability of that hypothesis and the likelihood of the evidence, so that beliefs about an event can be updated from past conditions related to it.
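A small worked example of Bayes' theorem, P(H | E) = P(E | H) · P(H) / P(E), using hypothetical numbers for a diagnostic test:

```python
# Hypothetical numbers: a test for a condition with 1% prevalence.
p_h = 0.01              # prior: P(condition)
p_e_given_h = 0.99      # sensitivity: P(positive | condition)
p_e_given_not_h = 0.05  # false-positive rate: P(positive | no condition)

# Total probability of a positive test (law of total probability).
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Posterior: probability of the condition given a positive test.
posterior = p_e_given_h * p_h / p_e
```

Despite the accurate test, the posterior is only about 17%, because the condition is rare to begin with; this is exactly the kind of prior-sensitive update Bayesian programming builds on.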
Behaviour informatics (BI): The informatics of behaviour, analysing behavioural data so as to obtain behaviour intelligence and behaviour insights.
Behaviour tree: A behaviour tree (BT) is a mathematical model of plan execution used in computer science, robotics, control systems and video games. It describes switching between a finite set of tasks in a modular fashion.
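A minimal sketch of the two classic composite nodes, Sequence and Selector; the game-agent tasks ("attack", "patrol") are invented for illustration:

```python
SUCCESS, FAILURE = "success", "failure"

def sequence(*children):
    # A Sequence succeeds only if every child succeeds, in order.
    def tick(state):
        for child in children:
            if child(state) == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def selector(*children):
    # A Selector succeeds as soon as any child succeeds.
    def tick(state):
        for child in children:
            if child(state) == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

def condition(key):
    return lambda state: SUCCESS if state.get(key) else FAILURE

def action(name, log):
    def tick(state):
        log.append(name)  # record which task ran
        return SUCCESS
    return tick

# A hypothetical game-agent tree: attack if an enemy is visible, else patrol.
log = []
tree = selector(
    sequence(condition("enemy_visible"), action("attack", log)),
    action("patrol", log),
)
```

Ticking the tree with different world states switches modularly between the tasks, which is the core idea of a BT.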
Case-based reasoning (CBR): Broadly construed, is the process of solving new problems based on the solutions of similar past problems.
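A minimal retrieve-and-reuse sketch of CBR, assuming a toy case base (the symptom features and diagnoses are invented for illustration):

```python
# Past cases: (symptom features, diagnosis). A new problem is solved by
# retrieving the most similar past case and reusing its solution.
cases = [
    ({"fever": 1, "cough": 1, "rash": 0}, "flu"),
    ({"fever": 0, "cough": 0, "rash": 1}, "allergy"),
]

def similarity(a, b):
    # Count matching feature values (a deliberately simple measure).
    return sum(1 for k in a if a[k] == b.get(k))

def solve(new_case):
    best_features, best_solution = max(
        cases, key=lambda c: similarity(new_case, c[0])
    )
    return best_solution  # reuse the retrieved case's solution
```

Full CBR systems also revise the reused solution and retain the new case, but retrieval by similarity is the heart of the idea.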
Data mining: Is the process of discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems.
Decision boundary: The surface in input space at which a classifier's predicted class changes. In the case of backpropagation-based artificial neural networks or perceptrons, the type of decision boundary that the network can learn is determined by the number of hidden layers the network has.
Decision tree learning: Uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item’s target value (represented in the leaves).
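A hand-built sketch of the branches-to-leaves idea (the weather features and target values are invented for illustration; real systems learn the tree from data):

```python
# Internal nodes test a feature (the branches); leaves hold the
# predicted target value.
tree = {
    "feature": "outlook",
    "branches": {
        "sunny": {
            "feature": "humidity",
            "branches": {"high": "stay in", "normal": "play"},
        },
        "rainy": "stay in",
        "overcast": "play",
    },
}

def predict(node, sample):
    # Follow the branch matching the sample's feature value until a leaf.
    while isinstance(node, dict):
        node = node["branches"][sample[node["feature"]]]
    return node
```

Prediction is just a walk from the root, through matching branches, to a leaf holding the conclusion.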
Evolutionary algorithm: Is a subset of evolutionary computation, a generic population-based metaheuristic optimization algorithm. An EA uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection.
Feature extraction: In machine learning, pattern recognition and in image processing, feature extraction starts from an initial set of measured data and builds derived values (features) intended to be informative and non-redundant, facilitating the subsequent learning and generalization steps, and in some cases leading to better human interpretations.
Feature selection: In machine learning and statistics, feature selection, also known as variable selection, attribute selection or variable subset selection, is the process of selecting a subset of relevant features (variables, predictors) for use in model construction.
Forward chaining: Or forward reasoning is one of the two main methods of reasoning when using an inference engine and can be described logically as repeated application of modus ponens. Forward chaining is a popular implementation strategy for expert systems, business and production rule systems. The opposite of forward chaining is backward chaining.
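A minimal sketch of repeated modus ponens over a toy rule base (the rules and facts are invented for illustration), deriving new facts until no more can be added:

```python
# Each rule is (set of premises, conclusion): "if all premises hold,
# conclude the conclusion" (modus ponens).
rules = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground", "freezing"}, "ice"),
]
facts = {"rain", "freezing"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Fire any rule whose premises are all known facts.
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True
```

Starting from "rain" and "freezing", the engine first derives "wet_ground" and then, on the next pass, "ice": data-driven reasoning from facts toward conclusions, the mirror image of backward chaining.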
Generative adversarial network (GAN): Is a class of machine learning systems in which two neural networks, a generator and a discriminator, contest with each other in a zero-sum game framework.
Genetic algorithm (GA): Is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to generate high-quality solutions to optimization and search problems by relying on bio-inspired operators such as mutation, crossover and selection.
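A minimal sketch of the selection, crossover and mutation loop, maximizing an invented one-dimensional fitness function whose peak is at x = 3:

```python
import random

random.seed(42)

def fitness(x):
    return -(x - 3.0) ** 2  # invented objective, maximized at x = 3

# Initial population of random candidate solutions.
population = [random.uniform(-10, 10) for _ in range(30)]

for _ in range(100):
    # Selection: keep the fitter half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    # Crossover + mutation: children blend two parents, then jitter.
    children = []
    while len(children) < 15:
        a, b = random.sample(parents, 2)
        child = (a + b) / 2 + random.gauss(0, 0.1)  # mutation
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
```

After a hundred generations the best individual sits near the optimum at 3, driven only by the bio-inspired operators named above.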
Graph database (GD): Is a database that uses graph structures for semantic queries with nodes, edges and properties to represent and store data.
Incremental learning: Is a method of machine learning in which input data is continuously used to extend the existing model's knowledge.
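A minimal sketch of the idea with the simplest possible "model": a running mean that is extended one data point at a time, never revisiting old data:

```python
# State of the "model": how many examples seen, and the current estimate.
count, mean = 0, 0.0

def update(x):
    """Fold one new example into the model without storing past data."""
    global count, mean
    count += 1
    mean += (x - mean) / count  # incremental mean update
    return mean

# Data arrives as a stream; the model is extended example by example.
stream = [2.0, 4.0, 6.0, 8.0]
for x in stream:
    update(x)
```

Real incremental learners update richer models (e.g. weights of a classifier) the same way: each new example adjusts the existing model rather than triggering retraining from scratch.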
Named-entity recognition (NER): Also known as entity identification, entity chunking and entity extraction, it is a subtask of information extraction that seeks to locate and classify named-entity mentions in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values and percentages.
Pattern recognition: Is the automated recognition of patterns and regularities in data. Pattern recognition is closely related to artificial intelligence and machine learning, together with applications such as data mining and knowledge discovery in databases (KDD), and is often used interchangeably with these terms.
Reinforcement learning (RL): Is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Reinforcement learning is considered one of the three machine learning paradigms, alongside supervised learning and unsupervised learning.
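As a sketch of this reward-driven loop, the following tabular Q-learning example (the five-state corridor environment is invented for illustration) learns by trial and error to walk toward a rewarding goal state:

```python
import random

random.seed(0)

# A tiny corridor: states 0..4; reward 1 only for reaching state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action choice: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: nudge the estimate toward
        # reward + discounted best future value.
        best_next = max(q[(s2, b)] for b in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2
```

After training, the learned values prefer moving right in every state, i.e. the agent has discovered the policy that maximizes cumulative reward.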
Spatial-temporal reasoning: Is an area of artificial intelligence which draws from the fields of computer science, cognitive science, and cognitive psychology. The theoretic goal, on the cognitive side, involves representing and reasoning about spatial-temporal knowledge in the mind.
Unsupervised learning: A family of machine learning methods that find patterns in unlabelled data, that is, learning without a teacher. It is associated with Hebbian learning and self-organization, and a common formulation is modelling the probability density of the inputs.
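A minimal sketch of finding structure without labels, using a tiny hand-rolled k-means clustering on invented one-dimensional data:

```python
# Two well-separated 1-D clusters of unlabelled points.
data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
centroids = [0.0, 10.0]  # deliberately bad initial guesses

for _ in range(10):
    # Assignment step: attach each point to its nearest centroid.
    clusters = [[], []]
    for x in data:
        nearest = min(range(2), key=lambda i: abs(x - centroids[i]))
        clusters[nearest].append(x)
    # Update step: move each centroid to the mean of its cluster.
    centroids = [
        sum(c) / len(c) if c else centroids[i]
        for i, c in enumerate(clusters)
    ]
```

With no teacher and no labels, the centroids settle onto the two natural groups in the data, which is the essence of unsupervised learning.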