Fully observable vs. partially observable.
In a fully observable environment, the agent has access to all the information required to complete the target task, i.e. its sensors capture the complete state of the environment at each point in time. Image recognition operates in fully observable domains. Examples include the 8-puzzle, the blocks-world problem, and Sudoku.
A partially observable system is one in which the entire state of the system is not visible to an external sensor. In a partially observable system, the observer may use a memory to add information to its understanding of the system's state.
An example of a partially observable system is a card game in which some of the cards are discarded into a pile face down. Here the observer can view only their own cards and possibly those of the dealer; they cannot view the face-down (discarded) cards, nor the cards that will be dealt at some stage in the future.
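A minimal Python sketch of this idea (the class and attribute names are illustrative, not from any library): the full state holds every card, but the percept handed to a player contains only that player's own hand, so the agent must rely on memory of past percepts to reason about the hidden cards.

    import random

    class CardGame:
        def __init__(self):
            deck = list(range(52))
            random.shuffle(deck)
            self.hands = {"player": deck[:5], "dealer": deck[5:10]}
            self.discard_pile = deck[10:15]   # face down: hidden state
            self.draw_pile = deck[15:]        # future deals: hidden state

        def percept(self, who):
            # Partial observability: only the agent's own hand is returned,
            # never the discard pile or the undealt cards.
            return {"my_hand": list(self.hands[who])}

    game = CardGame()
    observed = game.percept("player")   # sees 5 cards, not the full state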
Single agent vs. multi-agent:
The agent in a single-agent system models itself, the environment, and their interactions. Agents are independent entities with their own goals, actions, and knowledge. In a single-agent system, no other such entities are recognized by the agent: even if there are other agents in the world, they are not modeled as having goals of their own and are simply treated as part of the environment.
A multi-agent system (M.A.S.) is a computerized system composed of multiple agents interacting within an environment. Multi-agent systems can be used to solve problems that are difficult or impossible for an individual agent or a monolithic system to solve. For example, an agent playing Tetris by itself is in a single-agent environment, whereas an agent playing checkers is in a two-agent environment.
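The contrast can be made concrete with a toy step function (all names here are hypothetical): the environment collects one action from every agent before computing the next state, and the single-agent case is simply the case where that list has length one.

    class Agent:
        def __init__(self, name):
            self.name = name
        def act(self, state):
            return 1  # trivial policy: always advance by one

    def step(state, agents):
        # The next state depends on the *joint* action of all agents;
        # with one agent this degenerates to the single-agent case.
        joint_action = [agent.act(state) for agent in agents]
        return state + sum(joint_action)

    state = 0
    state = step(state, [Agent("tetris_player")])          # single-agent
    state = step(state, [Agent("white"), Agent("black")])  # two-agent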
Deterministic vs. stochastic:
Deterministic AI environments are those in which the next state is completely determined by the current state and the action executed by the agent. In other words, deterministic environments ignore uncertainty. Most real-world AI environments are not deterministic; instead, they are classified as stochastic.
In a stochastic environment, the world can change while the agent is acting, so the next state does not depend solely on the current state and the agent's action. An automated car-driving system is a classic example: the environment is stochastic because the agent cannot control the traffic conditions on the road.
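The distinction is easy to state as code. In the sketch below (illustrative functions, not any standard API), the deterministic transition is a pure function of the (state, action) pair, while the stochastic transition can yield different next states for the same pair:

    import random

    def deterministic_transition(state, action):
        # Same (state, action) always yields the same next state.
        return state + action

    def stochastic_transition(state, action):
        # Same (state, action) may yield different next states:
        # e.g. traffic noise perturbs the outcome of a driving action.
        noise = random.choice([-1, 0, 1])
        return state + action + noise

    print(deterministic_transition(0, 2))  # always 2
    print(stochastic_transition(0, 2))     # 1, 2, or 3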
Episodic vs. sequential:
In an episodic environment, the agent's experience is divided into a series of discrete episodes, and its performance in one episode does not depend on its performance in the others.
Episodic environments are simpler from the agent developer's perspective because the agent can decide what action to perform based only on the current episode; it need not reason about interactions between this episode and future ones. Consider a pick-and-place robot agent used to detect defective parts on the conveyor belt of an assembly line: each part is inspected independently of the ones that came before it.
In a sequential environment, as the name suggests, previous decisions affect later ones: the agent's next action depends on the actions taken previously and on the actions it expects to take in the future. For example, in checkers, any move can affect all subsequent moves. The automatic car-driving example is also sequential: the current decision influences the next ones, so if the agent applies the brakes, it must then press the clutch and shift down a gear as the consequent actions.
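A small sketch of the difference from the agent's point of view (both agent classes are hypothetical): the episodic agent maps the current percept directly to an action, while the sequential agent keeps a history because earlier actions constrain later ones.

    class EpisodicAgent:
        def act(self, percept):
            # Decision uses only the current episode: "is this part defective?"
            return "reject" if percept == "defective" else "accept"

    class SequentialAgent:
        def __init__(self):
            self.history = []   # past actions influence future choices
        def act(self, percept):
            if self.history and self.history[-1] == "brake":
                action = "press_clutch"   # forced by the previous action
            else:
                action = "brake" if percept == "obstacle" else "cruise"
            self.history.append(action)
            return action

    robot = EpisodicAgent()
    robot.act("defective")     # "reject"; no history needed

    driver = SequentialAgent()
    driver.act("obstacle")     # "brake"
    driver.act("clear")        # "press_clutch", a consequence of braking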
Static vs. dynamic:
Static AI environments rely on data and knowledge sources that do not change over time. If the environment remains unchanged while the agent is performing its task, it is called a static environment, e.g. the vacuum-cleaner environment.
If the environment can change while the agent is performing its task, it is called a dynamic environment. The automatic car-driving example is a dynamic environment, as the surroundings keep changing all the time.
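One way to picture this in code (illustrative classes only): a dynamic environment advances on its own clock even if the agent does nothing, whereas a static environment changes only in response to the agent's actions.

    class StaticEnv:
        def __init__(self):
            self.state = 0
        def tick(self):
            pass              # nothing changes while the agent deliberates

    class DynamicEnv:
        def __init__(self):
            self.state = 0
        def tick(self):
            self.state += 1   # e.g. traffic keeps moving regardless of the agent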
Discrete vs. continuous.
When the percepts and actions are distinct and clearly defined, so that there is a finite number of them, the environment is called discrete. For example, the chess environment has a finite number of distinct percepts and actions.
When the agent receives a continuous stream of input, so that all the percepts and actions cannot be defined beforehand, the environment is called continuous, e.g. an automated car-driving system, where quantities such as speed and steering angle range over continuous values.
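In code, the contrast shows up in the action space (the values below are illustrative): a discrete environment's actions can be enumerated as a finite set, while a continuous environment's actions range over real intervals and cannot be listed.

    import random

    # Discrete: chess-like, actions form an enumerable finite set.
    DISCRETE_ACTIONS = ["e2e4", "d2d4", "g1f3"]   # a few legal chess moves
    action = random.choice(DISCRETE_ACTIONS)

    # Continuous: driving-like, actions are real-valued.
    steering_angle = random.uniform(-30.0, 30.0)  # degrees
    throttle = random.uniform(0.0, 1.0)           # fraction of full power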
Known vs. unknown
In a known environment, the outcomes (or outcome probabilities, if stochastic) of all actions are given to the agent. Obviously, in the case of an unknown environment, the agent has to gain knowledge about how the environment works before it can make good decisions.
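A final sketch (all names hypothetical): in a known environment the agent can simply look up the outcome of each action in a given model, while in an unknown environment it must build that model from experience before it can plan.

    # Known environment: the outcome of every action is given up front.
    MODEL = {("s0", "left"): "s1", ("s0", "right"): "s2"}

    def plan_with_model(state, action):
        return MODEL[(state, action)]   # just a lookup

    # Unknown environment: the agent records observed transitions and must
    # act in the world to gather this knowledge before it can plan.
    learned_model = {}

    def record_experience(state, action, next_state):
        learned_model[(state, action)] = next_state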