Introduction to Artificial Intelligence
Artificial Intelligence and related fields; a brief history of AI; applications of AI; definition and importance of knowledge and learning; agents, their types, and performance measures.
Intelligence
Intelligence is more than mere knowledge. It encompasses many skills, such as planning, prediction, evaluation, problem solving and learning. Other skills, such as perception and communication, can also be included.
Artificial Intelligence (AI)
In very simple terms, AI is the ability to build computer systems capable of performing tasks that normally require human intelligence, such as visual perception, speech recognition, decision making and translation between languages.
Development of AI
Charles Babbage and the Analytical Engine (1842)
Boole (1815-64), An Investigation of the Laws of Thought
Russell and Wittgenstein, work on logic and language (1889-1951)
Turing Test (1950) - explained later
Claude Shannon, Programming a Computer for Playing Chess (1950)
Evans and the ANALOGY program to solve geometrical IQ tests - explained later
Symbolic integration programs leading to packages such as MACSYMA, MAPLE, SMP and MATHEMATICA
Roberts and Waltz's line labelling to explain shape from imagery (1965)
Rule-based expert systems developed in 1970s
Behaviors of AI
The behaviors of an AI system are assessed as follows:
The Turing Test:
The Turing test, proposed by Alan Turing (1950), was designed to settle the question of whether a particular machine can think. He suggested a test based on indistinguishability from undeniably intelligent entities: human beings. The test involves an interrogator who interacts with one human and one machine. Within a given time the interrogator has to find out which of the two is the human and which is the machine.
The computer passes the test if the human interrogator, after posing some written questions, cannot tell whether the written responses came from the human or from the machine.
To pass a Turing test, a computer must have following capabilities:
Natural language processing: to communicate successfully in English.
Knowledge representation: to store what it knows and hears.
Automated reasoning: to answer questions based on the stored information.
Machine learning: to adapt to new circumstances.
The Turing test avoids physical interaction between the interrogator and the computer, since physical simulation of a human being is not necessary for testing intelligence.
Applications of AI
Game Playing
Speech recognition
Understanding natural language
Computer vision- 2D, 3D
Expert systems => e.g., diagnosing a bacterial infection and suggesting treatment
Heuristic classification => e.g., deciding whether to accept a credit card purchase
Knowledge:
Knowledge is the ability to convert data and information into effective actions.
Learning:
It is concerned with the design and development of algorithms that allow computers to evolve their behavior based on empirical data, such as sensor data. A major focus of learning is to automatically recognize complex patterns and make intelligent decisions based on data.
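As an illustration, the following is a minimal sketch (in Python, with invented toy data) of one of the simplest learning schemes, a 1-nearest-neighbour classifier: the "learned" decision rule is nothing more than the stored empirical examples themselves.

    import math

    def nearest_neighbour_predict(training_data, query):
        # training_data: list of (feature_vector, label) pairs gathered empirically
        best_label, best_dist = None, math.inf
        for features, label in training_data:
            dist = math.dist(features, query)   # Euclidean distance to a stored example
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

    # Toy sensor readings and labels, purely for illustration.
    data = [((0.1, 0.2), "cold"), ((0.9, 0.8), "hot"), ((0.2, 0.1), "cold")]
    print(nearest_neighbour_predict(data, (0.85, 0.9)))   # -> "hot"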
Agent:
An agent is anything that perceives the environment and takes action on the environment.
If an agent always takes the right (rational) action on the basis of the information available to it, it is known as a rational agent.
Agent → Percept → Decision → Action
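A minimal sketch of this percept-decision-action cycle is shown below; the toy environment and the decision rule are hypothetical stand-ins used only to illustrate the loop.

    class ToyEnvironment:
        def __init__(self):
            self.temperature = 30
        def perceive(self):
            return self.temperature          # what the agent can sense
        def apply(self, action):
            if action == "cool":
                self.temperature -= 1        # the action changes the environment

    def decide(percept):
        return "cool" if percept > 25 else "wait"   # a simple decision rule

    def run_agent(environment, decide, steps=10):
        for _ in range(steps):
            percept = environment.perceive()         # percept
            action = decide(percept)                 # decision
            environment.apply(action)                # action

    env = ToyEnvironment()
    run_agent(env, decide)
    print(env.temperature)   # driven down to 25, then the agent waits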
Goals of Agent:
High performance
Optimized result
Rational Action (right action)
PEAS-based grouping of Agents
PEAS stands for Performance, Environment, Actuators, and Sensors.
Performance:
The output we get from the agent. All the results that an agent produces after processing its percepts come under its performance.
Environment:
All the surrounding things and conditions in which an agent operates fall in this section. It consists of everything the agent works within and interacts with.
Actuators:
The devices, hardware or software, through which the agent performs actions or processes information to produce a result are the actuators of the agent.
Sensors:
The devices through which the agent observes and perceives its environment are the sensors of the agent.
EXAMPLE:
Consider the task of designing an automated car.
Performance measure: Safe, fast, comfortable trip, maximize profits
Environment: Roads, other traffic, pedestrians
Actuators: Steering wheel, accelerator, brake, signal, horn
Sensors: Cameras, speedometer, GPS, odometer, engine sensors
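As a rough illustration, a PEAS description can be written down as a simple data structure; the Python sketch below simply records the automated-car example above and is not a standard representation.

    from dataclasses import dataclass

    @dataclass
    class PEAS:
        performance: list
        environment: list
        actuators: list
        sensors: list

    automated_car = PEAS(
        performance=["safe", "fast", "comfortable trip", "maximize profits"],
        environment=["roads", "other traffic", "pedestrians"],
        actuators=["steering wheel", "accelerator", "brake", "signal", "horn"],
        sensors=["cameras", "speedometer", "GPS", "odometer", "engine sensors"],
    )
    print(automated_car.sensors)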
Medical diagnosis system
Performance measure: Healthy patient, minimize costs, lawsuits
Environment: Patient, hospital, staff
Actuators: Screen display (Questions, tests, diagnoses, treatments, referrals)
Sensors: Keyboard (entry of symptoms, findings, patient's answers)
Part picking robot
Performance measure: Percentage of parts in correct bins
Environment: Conveyor belt with parts, bins
Actuators: Jointed arm and hand
Sensors: Camera, joint angle sensors
Satellite image analysis system
Performance measure: Correct categorizations
Environment: Downlink from orbiting satellite
Actuators: Display categorization of scene
Sensors: Colour pixel arrays
Refinery controller
Performance measure: Maximize purity, yield, safety
Environment: Refinery, operator
Actuators: Valves, pumps, heater, displays
Sensors: Temperature, pressure and chemical sensors
Types of Agent:
Based on their degree of perceived intelligence and capability, agents are of the following types:
1. Table-driven agents
These agents use a percept-sequence/action table held in memory to find the next action. They are implemented by a (large) lookup table.
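A minimal sketch, assuming an invented vacuum-world-style table, is given below: the whole percept sequence seen so far is used as the key into the lookup table.

    table = {
        ("A-dirty",): "clean",
        ("A-dirty", "A-clean"): "move_right",
        ("A-dirty", "A-clean", "B-dirty"): "clean",
    }

    percept_sequence = []

    def table_driven_agent(percept):
        percept_sequence.append(percept)                      # remember everything seen so far
        return table.get(tuple(percept_sequence), "no_op")    # default when the entry is missing

    print(table_driven_agent("A-dirty"))      # -> "clean"
    print(table_driven_agent("A-clean"))      # -> "move_right"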
2. Simple reflex agents
This agent works only on the basis of the current percept and does not consider the history or previous state of the system.
This type of agent is based upon condition-action rules: if a condition is true, the corresponding action is taken, otherwise not. A minimal sketch of such an agent appears after the list of problems below.
PROBLEMS FACED:
Very limited intelligence.
No knowledge about the non-perceptual parts of the state.
When operating in a partially observable environment, infinite loops are often unavoidable.
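The sketch below uses invented condition-action rules for a two-location vacuum world; note that only the current percept is consulted.

    rules = [
        (lambda p: p["status"] == "dirty", "clean"),
        (lambda p: p["location"] == "A",   "move_right"),
        (lambda p: p["location"] == "B",   "move_left"),
    ]

    def simple_reflex_agent(percept):
        for condition, action in rules:
            if condition(percept):        # first matching condition-action rule wins
                return action
        return "no_op"

    print(simple_reflex_agent({"location": "A", "status": "dirty"}))   # -> "clean"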
3. Model based reflex agents
It works by finding a rule whose condition matches the current situation, while also maintaining an internal state (a model of the world).
It can handle partially observable environments.
Updating the internal state requires information about how the world evolves independently of the agent and how the agent's actions affect the world.
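A minimal sketch is given below, assuming the same invented vacuum world: the internal state records which locations have been visited, so the agent can behave sensibly even though each percept only reveals the current location.

    state = {"visited": set()}    # internal model of the (partially observable) world

    def model_based_agent(percept, all_locations=("A", "B")):
        state["visited"].add(percept["location"])     # update the model from the new percept
        if percept["status"] == "dirty":
            return "clean"
        if set(all_locations) - state["visited"]:     # some location still unexplored
            return "move_right"
        return "no_op"

    print(model_based_agent({"location": "A", "status": "clean"}))   # -> "move_right"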
4. Goal based reflex agents
The goal-based agent focuses only on reaching the goal that has been set, so the decisions taken by the agent are based on how far it currently is from the goal or desired state.
Its every action is intended to reduce its distance from the goal.
This agent is more flexible, and it develops its decision-making skill by choosing the right option from the various options available.
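A minimal sketch is given below; the "world" is just a hypothetical position on a line, and each candidate action is scored by how close its predicted outcome gets to the goal.

    GOAL = 10   # the desired state, here simply a target position

    def predict(position, action):
        return position + {"left": -1, "stay": 0, "right": +1}[action]

    def goal_based_agent(position):
        # choose the action whose predicted result minimizes the distance to the goal
        return min(("left", "stay", "right"),
                   key=lambda a: abs(GOAL - predict(position, a)))

    print(goal_based_agent(4))    # -> "right"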
5. Utility based agents
These agents are concerned with the preference (utility) of each state. When there are multiple options available, the utility-based agent takes a decision on the basis of how much satisfaction (utility) each option gives it.
This approach is somewhat like adding emotions to the agent, because after taking any decision the agent asks, "How happy am I with this decision?"
This agent was developed because sometimes achieving the desired goal is not enough; we may look for a quicker, safer or cheaper alternative to reach the destination.
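A minimal sketch is given below, continuing the same hypothetical one-dimensional world: instead of only measuring distance to the goal, each option is scored by a utility function that trades progress off against an invented cost (think fuel, risk or time).

    GOAL = 10

    def utility(position, cost):
        return -abs(GOAL - position) - 0.5 * cost    # closer is better, cost hurts

    def utility_based_agent(position):
        options = {"walk": (position + 1, 1), "drive": (position + 3, 4)}
        # pick the option whose resulting (position, cost) has the highest utility
        return max(options, key=lambda a: utility(*options[a]))

    print(utility_based_agent(0))   # -> "drive" (utility -9.0 beats "walk" at -9.5)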
[Figure: schematic diagrams of simple reflex, model-based reflex, goal-based and utility-based agents]