A First Course in Artificial Intelligence
- Length: 342 pages
- Edition: 1
- Language: English
- Publisher: Bentham Science Publishers
- Publication Date: 2021-07-13
- ISBN-10: 168108855X
- ISBN-13: 9781681088556
The importance of Artificial Intelligence cannot be over-emphasised today, when automation is already an integral part of industrial and business processes.
A First Course in Artificial Intelligence is a comprehensive textbook for beginners that covers all the fundamentals of Artificial Intelligence. Seven chapters (divided into thirty-three units) introduce the student to key concepts of the discipline in simple language, including expert systems, natural language processing, machine learning, machine learning applications, sensory perception (computer vision, tactile perception) and robotics. Each chapter provides, in separate units, the relevant history, applications, algorithms and programming, with case studies and examples. The simplified approach to the subject enables beginners in computer science with a basic knowledge of Java programming to easily understand the contents. The text also introduces the basics of the Python programming language, with demonstrations of natural language processing, and presents the Waikato Environment for Knowledge Analysis (WEKA) as a tool for machine learning.
The book is suitable for students and teachers of introductory undergraduate and diploma-level courses that include modules on artificial intelligence.
Table of Contents

- Welcome
- Table of Contents
- Title
- BENTHAM SCIENCE PUBLISHERS LTD. End User License Agreement (for non-institutional, personal use)
  - Usage Rules
  - Disclaimer
  - Limitation of Liability
  - General
- PREFACE
- CONSENT FOR PUBLICATION
- CONFLICT OF INTEREST
- ACKNOWLEDGEMENT

Introduction to Artificial Intelligence
- Abstract
- 1. DEFINITION OF ARTIFICIAL INTELLIGENCE
  - 1.1. Artificial Intelligence
    - 1.1.1. Explanation of Artificial Intelligence
    - 1.1.2. Turing Test Model – Acting Like Human
    - 1.1.3. Cognitive Model – Thinking Like Human
    - 1.1.4. Rational Agent Model – Acting Rationally
    - 1.1.5. Law of Thought – Thinking Rationally
  - 1.2. Foundational Disciplines in Artificial Intelligence
    - 1.2.1. Philosophy
    - 1.2.2. Mathematics
    - 1.2.3. Psychology
    - 1.2.4. Computer Engineering
    - 1.2.5. Linguistics
    - 1.2.6. Biological Science and Others
  - 1.3. Conclusion
  - 1.4. Summary
- 2. HISTORY OF ARTIFICIAL INTELLIGENCE AND PROJECTION FOR THE FUTURE
  - 2.1. The Birth of Artificial Intelligence
    - 2.1.1. Alan Turing (1912 – 1954)
    - 2.1.2. Other Significant Contributors Prior to the Birth of AI
  - 2.2. Historical Development of Other Artificial Intelligence Systems
    - 2.2.1. Expert Systems (1950s – 1970s)
    - 2.2.2. First Artificial Intelligence Winter (1974 – 1980)
    - 2.2.3. Second Artificial Intelligence Winter (1987 – 1993)
    - 2.2.4. Intelligent Agents (1993 – Date)
  - 2.3. Projections into the Future of Artificial Intelligence
    - 2.3.1. Virtual Personal Assistants
  - 2.4. Conclusion
  - 2.5. Summary
- 3. EMERGING ARTIFICIAL INTELLIGENCE APPLICATIONS
  - 3.1. Artificial Intelligence Applied Technologies
    - 3.1.1. Blockchain Technology
      - 3.1.1.1. Bitcoin: First Application of Artificial Intelligence to Blockchain
        - 3.1.1.1.1. Bitcoin Wallet
        - 3.1.1.1.2. Peer-to-Peer
        - 3.1.1.1.3. Miners
        - 3.1.1.1.4. Transaction
        - 3.1.1.1.5. Earning Reward
      - 3.1.1.2. Applications of Artificial Intelligence to Blockchain Technology
        - 3.1.1.2.1. Smart Computing Power
        - 3.1.1.2.2. Analyses Diverse Data
        - 3.1.1.2.3. Analyses Protected Data
        - 3.1.1.2.4. Monetizes Data
        - 3.1.1.2.5. Decision Making
    - 3.1.2. Internet of Things (IoT)
      - 3.1.2.1. History of the Internet of Things
    - 3.1.3. Data Science, Big Data and Data Analytics
  - 3.2. Artificial Intelligence Products
    - 3.2.1. IBM Watson
    - 3.2.2. Self-Driving/Autonomous Cars
    - 3.2.3. Face Recognition System
  - 3.3. Conclusion
  - 3.4. Summary
- CONCLUDING REMARKS
- REFERENCES

Expert System
- Abstract
- 1. EXPERT SYSTEM BASICS
  - 1.1. Components of an Expert System
    - 1.1.1. Human Expert
    - 1.1.2. Knowledge Engineer
    - 1.1.3. Knowledge Base
    - 1.1.4. Inference Engine
    - 1.1.5. User Interface
    - 1.1.6. Non-Expert User
  - 1.2. Knowledge Acquisition
    - 1.2.1. Knowledge Elicitation
    - 1.2.2. Intermediate Representation
    - 1.2.3. Executable Form Representation
  - 1.3. Characteristics of Expert Systems
  - 1.4. Examples of Expert Systems
    - 1.4.1. Medical Diagnosis System
    - 1.4.2. Game System
    - 1.4.3. Financial Forecast/Advice System
    - 1.4.4. Identification System
    - 1.4.5. Water/Oil Drilling System
    - 1.4.6. Car Engine Diagnosis System
  - 1.5. Importance of Expert Systems
  - 1.6. Conclusion
  - 1.7. Summary
- 2. KNOWLEDGE ENGINEERING
  - 2.1. Foundations of Knowledge Engineering
    - 2.1.1. Knowledge Engineering Processes
      - 2.1.1.1. Knowledge Acquisition
      - 2.1.1.2. Knowledge Representation
      - 2.1.1.3. Knowledge Verification and Validation
      - 2.1.1.4. Inferencing
      - 2.1.1.5. Explanation and Justification
    - 2.1.2. Sources and Types of Knowledge
    - 2.1.3. Levels and Categories of Knowledge
      - 2.1.3.1. Shallow Level
      - 2.1.3.2. Deep Level
      - 2.1.3.3. Declarative Knowledge
      - 2.1.3.4. Procedural Knowledge
      - 2.1.3.5. Meta-Knowledge
  - 2.2. Knowledge Acquisition Methods
    - 2.2.1. Knowledge Modelling Methods
  - 2.3. Knowledge Verification and Validation
  - 2.4. Knowledge Representation
    - 2.4.1. Production Rules
    - 2.4.2. Semantic Network
    - 2.4.3. Frames
  - 2.5. Inferencing
    - 2.5.1. Common Sense Inferencing/Reasoning
    - 2.5.2. Rule Base Inferencing/Reasoning
  - 2.6. Explanation and Meta-Knowledge
  - 2.7. Inferencing with Uncertainty
  - 2.8. Expert System Development Environment
    - 2.8.1. Expert System Shells
    - 2.8.2. Programming Languages
    - 2.8.3. Hybrid Environment
  - 2.9. Conclusion
  - 2.10. Summary
- 3. PROPOSITIONAL LOGIC
  - 3.1. Propositional Logic as a Knowledge Representation Formalism
  - 3.2. Syntax of Propositional Logic Connectives
  - 3.3. Semantics of Propositional Logic
  - 3.4. Automating Logical Reasoning
  - 3.5. Uncertainty in Logical Reasoning
  - 3.6. Automating Uncertain Propositional Logic
  - 3.7. Conclusion
  - 3.8. Summary
- CONCLUDING REMARKS
- REFERENCES

Natural Language Processing
- Abstract
- 1. FUNDAMENTALS OF NATURAL LANGUAGE PROCESSING
  - 1.1. Applications of Natural Language Processing
  - 1.2. The Future of Natural Language Processing
  - 1.3. Conclusion
  - 1.4. Summary
- 2. TEXT PRE-PROCESSING
  - 2.1. Text Normalization
  - 2.2. Tokenization
  - 2.3. Stop Words Removal
  - 2.4. Stemming
  - 2.5. Lemmatization
  - 2.6. Conclusion
  - 2.7. Summary
- 3. TEXT REPRESENTATION
  - 3.1. Bag of Words
  - 3.2. Lookup Dictionary
  - 3.3. One-Hot Encoding
  - 3.4. Word Embedding
  - 3.5. Conclusion
  - 3.6. Summary
- 4. PARTS OF SPEECH TAGGING
  - 4.1. Fundamentals of Parts of Speech
  - 4.2. Importance of Parts of Speech Tagging
    - 4.2.1. Word Pronunciation in Text-to-Speech Conversion
    - 4.2.2. Word Sense Disambiguation
    - 4.2.3. Stemming as a Text Pre-processing Task
  - 4.3. Computational Methods for Parts of Speech Tagging
    - 4.3.1. Rule-Based Tagging Method/Algorithm
    - 4.3.2. Stochastic-Based Tagging Method/Algorithm
    - 4.3.3. Transformation-Based Tagging
  - 4.4. Conclusion
  - 4.5. Summary
- 5. TEXT TAGGING/TEXT CLASSIFICATION
  - 5.1. Approaches to Text Classification
    - 5.1.1. Rule-Based Text Classification
    - 5.1.2. Machine Learning Based Text Classification
    - 5.1.3. Rule and Machine Learning Based Text Classification
  - 5.2. Machine Learning Algorithms for Text Classification
    - 5.2.1. Naïve Bayes Text Classification Machine Learning Algorithm
    - 5.2.2. Decision Tree Text Classification Machine Learning Algorithm
  - 5.3. Conclusion
  - 5.4. Summary
- 6. TEXT SUMMARIZATION
  - 6.1. Brief History of Automatic Text Summarization
  - 6.2. Approaches to Text Summarization
    - 6.2.1. Extractive Text Summarization
    - 6.2.2. Abstractive Text Summarization
  - 6.3. Frequency-Based Technique
  - 6.4. Feature-Based Technique
  - 6.5. TextRank Algorithm
  - 6.6. Conclusion
  - 6.7. Summary
- 7. SENTIMENT ANALYSIS
  - 7.1. Types of Sentiment Analysis
    - 7.1.1. Fine-Grained Sentiment Analysis
    - 7.1.2. Emotion Detection Sentiment Analysis
    - 7.1.3. Aspect-Based Sentiment Analysis
    - 7.1.4. Multi-Lingual Sentiment Analysis
    - 7.1.5. Intent Detection Sentiment Analysis
  - 7.2. Applications of Sentiment Analysis
    - 7.2.1. Social Media Sentiment Analysis
    - 7.2.2. Internet Sentiment Analysis
    - 7.2.3. Sentiment Analysis on Customer Feedback
    - 7.2.4. Sentiment Analysis on Customer Services
  - 7.3. Approaches to Sentiment Analysis
    - 7.3.1. Rule-Based Approach
    - 7.3.2. Machine Learning Based Approach
    - 7.3.3. Hybrid Approach
  - 7.4. Conclusion
  - 7.5. Summary
- 8. NLP USING PYTHON PROGRAMMING LANGUAGE
  - 8.1. Fundamentals of NLP Using Python
    - 8.1.1. Natural Language Toolkit (NLTK)
    - 8.1.2. Getting Started with NLP Using Python
    - 8.1.3. Using Lists in Python for NLP
    - 8.1.4. Manipulating Strings in Python
    - 8.1.5. Using the Python Text Editor
  - 8.2. Using Control Structures in Python for NLP
    - 8.2.1. Selective Control Structure
    - 8.2.2. Repetitive/Looping Control Structure
  - 8.3. Accessing Text Corpora in Python
    - 8.3.1. Gutenberg Corpus
    - 8.3.2. Web and Chat Text
    - 8.3.3. Brown Corpus
    - 8.3.4. Reuters Corpus
    - 8.3.5. Inaugural Address Corpus
  - 8.4. Conclusion
  - 8.5. Summary
- CONCLUDING REMARKS
- REFERENCES

Machine Learning
- Abstract
- 1. INTRODUCTION TO MACHINE LEARNING
  - 1.1. Fundamentals of Machine Learning
    - 1.1.1. Definition of Machine Learning
    - 1.1.2. Types of Learning
    - 1.1.3. Basic Terminologies in Machine Learning
    - 1.1.4. Components of a Machine Learning System
  - 1.2. Input to a Machine Learning System
  - 1.3. Characteristics of Input Data
  - 1.4. Output from a Machine Learning System
    - 1.4.1. Regression Equation
    - 1.4.2. Regression Trees
    - 1.4.3. Table
    - 1.4.4. Cluster Diagram
    - 1.4.5. Decision Tree
    - 1.4.6. Classification Rule
  - 1.5. Conclusion
  - 1.6. Summary
- 2. DATA PREPARATION
  - 2.1. Fundamentals of Data Preparation
    - 2.1.1. Data Selection
    - 2.1.2. Data Pre-processing
    - 2.1.3. Data Transformation
  - 2.2. Data Transformation Techniques
    - 2.2.1. Feature Engineering
    - 2.2.2. Feature Scaling
  - 2.3. Conclusion
  - 2.4. Summary
- 3. SUPERVISED MACHINE LEARNING
  - 3.1. Prediction-Based Machine Learning Algorithms
    - 3.1.1. Simple Linear Regression Algorithm
      - 3.1.1.1. Least Square Method of Simple Linear Regression
      - 3.1.1.2. Simple Linear Regression Algorithm Based on the Least Square Method
      - 3.1.1.3. Illustrating the Use of the Linear Regression Algorithm
    - 3.1.2. Multiple Linear Regression Algorithm
      - 3.1.2.1. Least Square Method for Multiple Linear Regression
      - 3.1.2.2. Multiple Linear Regression Algorithm Based on the Least Square Method
  - 3.2. Classification-Based Machine Learning Algorithms
    - 3.2.1. Naïve Bayes Machine Learning Algorithm
      - 3.2.1.1. Bayes Theorem
      - 3.2.1.2. Illustrating the Use of the Naïve Bayes Algorithm to Solve a Classification Problem
    - 3.2.2. Decision Tree Machine Learning Algorithm
      - 3.2.2.1. Basic Terminologies Used in the Decision Tree Algorithm
      - 3.2.2.2. Outline of the Decision Tree Algorithm
      - 3.2.2.3. Determining the Most Information Gain of Attributes by Visualization
      - 3.2.2.4. Illustrating with an Example
      - 3.2.2.5. Computation of the Information Gain of an Attribute by Formula
      - 3.2.2.6. Illustrating the Computation of Information Gain with an Example
      - 3.2.2.7. Using a Decision Tree to Solve a Classification Problem
  - 3.3. Conclusion
  - 3.4. Summary
- 4. SIMPLE REGRESSION ALGORITHMS FOR NON-LINEAR RELATIONSHIPS
  - 4.1. Types of Simple Non-Linear Relationships
    - 4.1.1. Simple Non-Linear Relationships
    - 4.1.2. Polynomial of Degree 2 with Minimum Point
    - 4.1.3. Polynomial of Degree 2 with Maximum Point
    - 4.1.4. Polynomial of Degree 3 with Minimum Point on the Right
    - 4.1.5. Polynomial of Degree 3 with Maximum Point on the Right
  - 4.2. Regression Algorithms for Non-Linear Relationships
    - 4.2.1. Regression Algorithm for Simple Non-Linear Relationships
      - 4.2.1.1. Example Illustrating the Use of the Regression Algorithm for a Simple Non-Linear Relationship
    - 4.2.2. Regression Algorithm for Polynomial of Degree 2 with Minimum Point
      - 4.2.2.1. Example Illustrating the Use of the Regression Algorithm for Polynomial of Degree 2 with Minimum Point
    - 4.2.3. Regression Algorithm for Polynomial of Degree 2 with Maximum Point
      - 4.2.3.1. Example Illustrating the Use of the Regression Algorithm for Polynomial of Degree 2 with Maximum Point
    - 4.2.4. Regression Algorithm for Polynomial of Degree 3 with Minimum Point on the Right
    - 4.2.5. Regression Algorithm for Polynomial of Degree 3 with Maximum Point on the Right
  - 4.3. Conclusion
  - 4.4. Summary
- 5. UNSUPERVISED MACHINE LEARNING ALGORITHMS
  - 5.1. Clustering Algorithms
    - 5.1.1. K-Means Clustering Algorithm
    - 5.1.2. Using the K-Means Algorithm to Perform Clustering on a Dataset
    - 5.1.3. Choosing the Number of K Clusters
    - 5.1.4. Using WEKA to Perform K-Means Clustering on a Dataset
  - 5.2. Data Visualization
    - 5.2.1. Visualizing a Two-Dimensional Linear Dataset Using a Scatter Plot
    - 5.2.2. Visualizing the Probability Distribution of a Dataset Using a Scatter Plot
      - 5.2.2.1. Binomial Probability Distribution Function
      - 5.2.2.2. Poisson Probability Distribution Function
      - 5.2.2.3. Exponential Probability Distribution Function
      - 5.2.2.4. Normal Probability Distribution Function
  - 5.3. Conclusion
  - 5.4. Summary
- 6. WAIKATO ENVIRONMENT FOR KNOWLEDGE ANALYSIS (WEKA)
  - 6.1. Data Representation in WEKA
  - 6.2. Getting Started with WEKA
    - 6.2.1. Loading CSV Files in the WEKA Explorer
  - 6.3. Using WEKA to Solve Machine Learning Problems
  - 6.4. Using WEKA to Solve a Simple Linear Regression Problem
  - 6.5. Using WEKA to Solve Linear Regression on the CPU.arff Dataset
  - 6.6. Using WEKA to Do Naïve Bayes Classification on the Nominal Weather.arff Dataset
  - 6.7. Conclusion
  - 6.8. Summary
- 7. NEURAL NETWORK
  - 7.1. Biological Neurons
    - 7.1.1. How the Biological Neuron Works
  - 7.2. Artificial Neural Network
    - 7.2.1. Feedforward Multi-Layer Perceptron
    - 7.2.2. Effect of Noise and Hardware Failure on the Artificial Neuron
    - 7.2.3. Continuous Input and Output Signals of the Artificial Neuron
    - 7.2.4. Probabilistic Output Signal of the Artificial Neuron
    - 7.2.5. Training the Artificial Neural Network
      - 7.2.5.1. Threshold Logic Unit as a Linear Classifier
      - 7.2.5.2. Representing Logic Functions/Gates Using a Perceptron
      - 7.2.5.3. Threshold Logic Unit as a Generalized Linear Classifier
      - 7.2.5.4. Increasing the Dimension of the Input and Weight Vectors by 1
      - 7.2.5.5. The Perceptron Learning Algorithm
      - 7.2.5.6. Gradient Descent Technique and the Delta Rule
  - 7.3. Back Propagation Algorithm
  - 7.4. Using WEKA to Solve an Artificial Neural Network Problem
  - 7.5. Conclusion
  - 7.6. Summary
- 8. DEEP LEARNING
  - 8.1. Deep Feedforward Network
  - 8.2. Applications of the Deep Feedforward Network
    - 8.2.1. Application of Deep Learning to Logic Function Evaluation
  - 8.3. Deep Convolutional Neural Network
    - 8.3.1. Layers of a Deep Convolutional Neural Network
      - 8.3.1.1. Convolutional Layer
      - 8.3.1.2. Pooling Layer
      - 8.3.1.3. Fully Connected Layer
  - 8.4. Deep Recurrent Neural Network
  - 8.5. Conclusion
  - 8.6. Summary
- 9. REINFORCEMENT LEARNING
  - 9.1. Introduction to Reinforcement Learning
  - 9.2. Features of Reinforcement Learning
    - 9.2.1. Trade-off between Exploitation and Exploration
    - 9.2.2. Holistic Approach to Problem Solving
    - 9.2.3. The Goal of the Agent is Central in Reinforcement Learning
    - 9.2.4. Fruitful Interaction with Other Disciplines
    - 9.2.5. Evaluative Feedback
  - 9.3. Elements of Reinforcement Learning
    - 9.3.1. Agent
    - 9.3.2. Environment
    - 9.3.3. Action
    - 9.3.4. Environment State
    - 9.3.5. Policy
    - 9.3.6. Reward Signal
    - 9.3.7. Value Function
    - 9.3.8. Time Step
    - 9.3.9. Model of the Environment
  - 9.4. History of Reinforcement Learning
  - 9.5. Conclusion
  - 9.6. Summary
- CONCLUDING REMARKS
- REFERENCES

Machine Learning Applications
- Abstract
- 1. ANALYZING A TERRORISM DATASET USING CLASSIFICATION-BASED ALGORITHMS
  - 1.1. Introduction
  - 1.2. Methodology for Collection and Analysis of the Terrorism Dataset
  - 1.3. Design of the Two Machine Learning Algorithms
  - 1.4. Naïve Bayes Algorithm
  - 1.5. The Decision Tree Algorithm
  - 1.6. Simulation, Results and Discussion
  - 1.7. Conclusion
- 2. ANALYZING A TERRORISM DATASET USING PROBABILITY DISTRIBUTION FUNCTIONS
  - 2.1. Methodology for Collection and Visualization of the Terrorism Dataset
  - 2.2. Theory of the Probability Distribution Functions
    - 2.2.1. Binomial Probability Distribution Function
    - 2.2.2. Poisson Probability Distribution Function
    - 2.2.3. Exponential Probability Distribution Function
    - 2.2.4. Normal Probability Distribution Function
  - 2.3. Simulations, Results and Discussion
    - 2.3.1. Results of Simulated Models for the Binomial Probability Distribution Function
    - 2.3.2. Results of Simulated Models for the Poisson Probability Distribution Function
    - 2.3.3. Results of Simulated Models for the Normal Probability Distribution Function
  - 2.4. Conclusion
- 3. POLYNOMIAL REGRESSION ALGORITHM FOR ANALYSING A COVID-19 DATASET
  - 3.1. Generalized Ordinary Least Square Method
  - 3.2. Literature Review
  - 3.3. Development of the Polynomial Regression Algorithm
    - 3.3.1. Polynomial of Degree 2 with Minimum Point
    - 3.3.2. Polynomial of Degree 2 with Maximum Point
    - 3.3.3. Polynomial Dataset of Degree 3 with Minimum Point on the Right
    - 3.3.4. Polynomial Dataset of Degree 3 with Maximum Point on the Right
    - 3.3.5. Polynomial Dataset of Degree n with Minimum Point on the Right
    - 3.3.6. Polynomial Dataset of Degree n with Maximum Point on the Right
  - 3.4. Simulation and Discussion of Results
  - 3.5. Conclusion
- CONCLUDING REMARKS
- REFERENCES

Sensory Perception
- Abstract
- 1. COMPUTER VISION
  - 1.1. Fundamentals of Computer Vision
  - 1.2. Applications of Computer Vision
    - 1.2.1. Vehicle Driver Assistance and Traffic Management
    - 1.2.2. Eye and Head Tracker
    - 1.2.3. Film and Video for Sports Analysis
    - 1.2.4. Gesture Recognition
    - 1.2.5. General-Purpose Vision System
    - 1.2.6. Industrial Automation and Inspection for the Electronics Industry
    - 1.2.7. Industrial Automation and Inspection for the Agriculture Industry
  - 1.3. History of Computer Vision
  - 1.4. Image Formation
    - 1.4.1. Geometry of Image
      - 1.4.1.1. Two- and Three-Dimensional Geometry
      - 1.4.1.2. Two- and Three-Dimensional Transformations
      - 1.4.1.3. Types of Two- and Three-Dimensional Transformations
      - 1.4.1.4. Combined Transformation
  - 1.5. Image Recognition
    - 1.5.1. Object/Face Detection
      - 1.5.1.1. Feature-Based Face Detection Technique
      - 1.5.1.2. Appearance-Based Approach
      - 1.5.1.3. Clustering and PCA
      - 1.5.1.4. Deep Neural Network
      - 1.5.1.5. Support Vector Machine
      - 1.5.1.6. Boosting
    - 1.5.2. Pedestrian Detection
    - 1.5.3. Face Recognition
    - 1.5.4. Instance Recognition
  - 1.6. Use of Computer Vision in Motion
  - 1.7. Conclusion
  - 1.8. Summary
- 2. SPEECH RECOGNITION
  - 2.1. Basics of Speech Recognition
  - 2.2. Basic Components of a Speech Recognition System
  - 2.3. Signal Processing
  - 2.4. Uncertainties in Speech Recognition
  - 2.5. Historical Development of Speech Recognition
  - 2.6. Applications of Speech Recognition Systems
    - 2.6.1. Cloud-Based Call Center/IVR (Interactive Voice Response)
    - 2.6.2. PC-Based Dictation/Command and Control
    - 2.6.3. Device-Based Embedded Command Control
  - 2.7. Conclusion
  - 2.8. Summary
- 3. TACTILE SENSING
  - 3.1. Tactile Sensing Explained
  - 3.2. Justification for Tactile Sensing
  - 3.3. Types of Tactile Sensors
  - 3.4. Conclusion
  - 3.5. Summary
- CONCLUDING REMARKS
- REFERENCES

Robotics
- Abstract
- 1. FOUNDATIONS OF ROBOTICS
  - 1.1. Robot Explained
  - 1.2. Asimov's Laws of Robotics
  - 1.3. Characteristics of Robots
  - 1.4. User-Level Applications of Robots
  - 1.5. Types of Robots
  - 1.6. Components of Robots
  - 1.7. Conclusion
  - 1.8. Summary
- 2. HUMANOID ROBOTS
  - 2.1. Motivations for Humanoid Robots
  - 2.2. Historical Development of Humanoid Robots
  - 2.3. Current Trends in Humanoid Robots
  - 2.4. Locomotion in Humanoid Robots
  - 2.5. Manipulation in Humanoid Robots
  - 2.6. Communication in Humanoid Robots
  - 2.7. Conclusion
  - 2.8. Summary
- 3. AUTONOMOUS/ROBOTIC VEHICLES
  - 3.1. Levels of Vehicle Automation
  - 3.2. How Autonomous Vehicle Technology Works
  - 3.3. History of Autonomous Vehicles
  - 3.4. Benefits of Autonomous Vehicles
  - 3.5. Development and Deployment of Autonomous Vehicles
  - 3.6. Planning Implications for Autonomous Vehicles
  - 3.7. Conclusion
  - 3.8. Summary
- 4. METRICS FOR ASSESSING THE PERFORMANCE OF ROBOTS
  - 4.1. Metrics for Navigational Tasks
  - 4.2. Metrics for Perception Tasks
  - 4.3. Metrics for Management Tasks
  - 4.4. Metrics for Manipulation Tasks
  - 4.5. Metrics for Social Tasks
  - 4.6. Conclusion
  - 4.7. Summary
- CONCLUDING REMARKS
- REFERENCES