{"id":203423,"date":"2016-05-13T01:43:11","date_gmt":"2016-05-13T05:43:11","guid":{"rendered":"http:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/uncategorized\/principles-of-artificial-intelligence-study-guide.php"},"modified":"2016-05-13T01:43:11","modified_gmt":"2016-05-13T05:43:11","slug":"principles-of-artificial-intelligence-study-guide","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/principles-of-artificial-intelligence-study-guide.php","title":{"rendered":"Principles of Artificial Intelligence: Study Guide"},"content":{"rendered":"<p><p>Course Information                                Course Materials                                AI Resources                                Quick Links                                                <\/p>\n<p>            Principles of Artificial Intelligence:            Study Guide          <\/p>\n<p>            Modeling dependence between attributes. The decision            tree classifier. Introduction to information theory.            Information, entropy, mutual information, and related            concepts (Kullback-Liebler divergence).          <\/p>\n<p>            Algorithm for learning decision tree classifiers from            data. The relationship between MAP hypothesis learning,            minimum description length principle (Occam's razor)            and the role of priors.          <\/p>\n<p>            Ovrfitting and methods to avoid overfitting -- dealing            with small sample sizes; prepruning and post-pruning.            Pitfalls of entropy as a splitting criterion for            multi-valued splits. Alternative splitting strategies            -- two-way versus multi-way splits; Alternative split            criteria: Gini impurity, Entropy, etc. 
Cost-sensitive decision tree induction -- incorporating attribute measurement costs and misclassification costs into decision tree induction.<\/p>\n<p>Dealing with categorical, numeric, and ordinal attributes. Dealing with missing attribute values during tree induction and instance classification.<\/p>\n<p>Evaluation of classifiers. Accuracy, Precision, Recall, Correlation Coefficient, ROC curves.<\/p>\n<p>Required Readings<\/p>\n<p>Recommended Readings<\/p>\n<p>Introduction to Artificial Neural Networks and Linear Discriminant Functions. Threshold logic unit (perceptron) and the associated hypothesis space. Connection with Logic and Geometry. Weight space and pattern space representations of perceptrons. Linear separability and related concepts. Perceptron Learning algorithm and its variants. Convergence properties of the perceptron algorithm. Winner-Take-All Networks.<\/p>\n<p>Bayesian Recipe for function approximation and Least Mean Squared (LMS) Error Criterion. Introduction to neural networks as trainable function approximators. Function approximation from examples. Minimization of Error Functions. Derivation of a Learning Rule for Minimizing Mean Squared Error Function for a Simple Linear Neuron. Momentum modification for speeding up learning. Introduction to neural networks for nonlinear function approximation. Nonlinear function approximation using multi-layer neural networks. Universal function approximation theorem. Derivation of the generalized delta rule (GDR), the backpropagation learning algorithm.
<\/p>\n<p>Generalized delta rule (backpropagation algorithm) in practice - avoiding overfitting, choosing neuron activation functions, choosing learning rate, choosing initial weights, speeding up learning, improving generalization, circumventing local minima, using domain-specific constraints (e.g., translation invariance in visual pattern recognition), exploiting hints, using neural networks for function approximation and pattern classification. Relationship between neural networks and Bayesian pattern classification. Variations -- Radial basis function networks. Learning nonlinear functions by searching the space of network topologies as well as weights.<\/p>\n<p>Lazy Learning Algorithms. Instance-based Learning, K-nearest neighbor classifiers, distance functions, locally weighted regression. Relative advantages and disadvantages of lazy learning and eager learning.<\/p>\n<p>Additional Information<\/p>\n<p>The material to be covered each week and the assigned readings (along with online lecture notes, if available) are included on this page. The study guide (including slides, notes, readings) will be updated each week. The assigned readings are divided into required and recommended readings and notes from recitations (if available). You will be responsible for the material covered in the lectures and the assigned required readings. You are strongly encouraged to explore the recommended readings.
<\/p>\n<p>Overview of the course; Overview of artificial intelligence: What is intelligence? What is artificial intelligence (AI)? History of AI; Working hypothesis of AI. Introduction to intelligent agents. Intelligent agents defined. Taxonomy of agents. Simple reflex agents (memoryless agents); agents with limited memory; rational agents; agents with goals; utility-driven agents.<\/p>\n<p>You may skip most of these readings if you have prior programming experience in Java.<\/p>\n<p>Goal-Based Agents. Problem-solving as state space search. Formulation of state-space search problems. Representing states and actions. Basic search algorithms and their properties: completeness, optimality, space and time complexity. Breadth-first search, depth-first search, backtracking search, depth-limited and iterative deepening search.<\/p>\n<p>Heuristic search. Finding optimal solutions. Best first search. A* Search: Adding Heuristics to Branch and Bound Search. Completeness, Admissibility, and Optimality of the A* algorithm. Design of admissible heuristic functions. Comparison of heuristic functions (\"informedness\" of heuristics).<\/p>\n<p>Problem Solving through Problem Reduction. Searching AND-OR graphs. A*-like admissible algorithm for searching AND-OR graphs.
<\/p>\n<p>Problem solving as Constraint Satisfaction. Properties of constraint satisfaction problems. Examples of constraint satisfaction problems. Iterative instantiation method for solving CSPs. Scene interpretation as constraint propagation (Waltz's line labeling algorithm). Node consistency, arc consistency, and related algorithms.<\/p>\n<p>Stochastic search: Metropolis Algorithm, Simulated Annealing, Genetic Algorithms.<\/p>\n<p>Introduction to Knowledge Representation. Logical Agents with explicit knowledge representation. Knowledge representation using propositional logic; Review of Propositional Logic: Propositional logic as a knowledge representation language: Syntax and Semantics; Possible worlds interpretation; Models and Logical notions of Truth and Falsehood; Logical Entailment; Inference rules; Modus ponens; Soundness and Completeness properties of inference. Modus Ponens is a sound inference rule for Propositional logic, but is not complete. Extending modus ponens - the resolution principle.<\/p>\n<p>Logical Agents without explicit representation. Comparison of logical agents with and without explicit representations.<\/p>\n<p>FOPL (First-Order Predicate Logic).
Ontological and epistemological commitments; syntax and semantics of FOPL. Examples. Theorem-proving in FOPL. Unification, instantiation, and entailment.<\/p>\n<p>Transformation of FOPL sentences into Clause Normal Form. Resolution by refutation for First Order Predicate Logic. Examples. Automated Theorem Proving. Search Control Strategies for Theorem Proving. Unit Preference, Set of Support and related approaches. Soundness and Completeness of Proof Procedures. Semidecidability of FOPL and its implications. Brief discussion of Datalog (for deductive databases) and Prolog (for logic programming).<\/p>\n<p>Emerging Applications of Knowledge Representation. Semantics-Driven Applications. Ontologies. Information Integration. Service Oriented Computing. Semantic Web. Brief overview of Ontology Languages: RDF, OWL. Description Logics - Syntax, Semantics, and Inference.<\/p>\n<p>Representing and Reasoning Under Uncertainty. Review of elements of probability. Probability spaces. Bayesian (subjective) view of probability. Probabilities as measures of belief conditioned on the agent's knowledge. Axioms of probability. Conditional probability. Bayes theorem. Random Variables. Independence. Probability Theory as a generalization of propositional logic.
Syntax and Semantics of a Knowledge Representation based on probability theory. Sound inference procedure for probabilistic reasoning.<\/p>\n<p>Independence and Conditional Independence. Exploiting independence relations for compact representation of probability distributions. Introduction to Bayesian Networks. Semantics of Bayesian Networks. D-separation. D-separation examples. Answering Independence Queries Using D-Separation tests.<\/p>\n<p>Probabilistic Inference Using Bayesian Networks. Exact Inference Algorithms - Variable Elimination Algorithm; Message Passing Algorithm; Junction Tree Algorithm. Complexity of Exact Bayesian Network Inference. Approximate inference using stochastic simulation (sampling, rejection sampling, and likelihood weighted sampling).<\/p>\n<p>Making Simple Decisions under uncertainty. Elements of utility theory, Constraints on rational preferences, Utility functions, Utility elicitation, Multi-attribute utility functions, utility independence, decision networks, value of information.<\/p>\n<p>Midterm examination<\/p>\n<p>Sequential Decision Problems. Markov Decision Processes. Value Iteration. Policy Iteration. Partially Observable MDPs.<\/p>\n<p>Markov Decision Processes and Sequential Decision Problems.
<\/p>\n<p>Reinforcement Learning. Agents that learn by exploring and interacting with environments. Examples of reinforcement learning scenarios. Markov decision processes. Types of environments (e.g., deterministic versus stochastic state transition functions and reward functions, stationary versus non-stationary environments, etc.).<\/p>\n<p>The credit assignment problem. The exploration vs. exploitation dilemma. Value Iteration algorithm. Policy Iteration algorithm. Q-learning Algorithm. Convergence of Q-learning. Temporal Difference Learning Algorithms.<\/p>\n<p>Recommended readings<\/p>\n<p>Additional Information<\/p>\n<p>Overview of machine learning. Why should machines learn? Operational definition of learning.<\/p>\n<p>Bayesian Decision Theory. Optimal Bayes Classifier. Minimum Risk Bayes Classifier.<\/p>\n<p>The rest is here: <\/p>\n<p><a target=\"_blank\" href=\"http:\/\/www.cs.iastate.edu\/~cs572\/studyguide.html\" title=\"Principles of Artificial Intelligence: Study Guide\">Principles of Artificial Intelligence: Study Guide<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Principles of Artificial Intelligence: Study Guide. Modeling dependence between attributes. The decision tree classifier. 
Introduction to information theory <a href=\"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/artificial-intelligence\/principles-of-artificial-intelligence-study-guide.php\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"limit_modified_date":"","last_modified_date":"","_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[13],"tags":[],"class_list":["post-203423","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/203423"}],"collection":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/comments?post=203423"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/posts\/203423\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/media?parent=203423"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/categories?post=203423"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/futurist-transhuman-news-blog\/wp-json\/wp\/v2\/tags?post=203423"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}