Artificial Intelligence in STEM Education: The Paradigmatic Shifts in Research, Education, and Technology

Artificial intelligence (AI) opens new opportunities for STEM education in K-12, higher education, and professional education contexts. This book summarizes AI in education (AIED) with a particular focus on the paradigmatic shifts in research, practice, and technology that AIED has undergone in recent years.

The 23 chapters in this edited collection track the paradigmatic shifts of AIED in STEM education, discussing how and why the paradigms have shifted, explaining the ways in which AI techniques have enabled those shifts, and envisioning the directions in which next-generation AIED is heading in the new era. As a whole, the book illuminates the main paradigms of AI in STEM education, summarizes the AI-enhanced techniques and applications used to enable those paradigms, and discusses AI-enhanced teaching, learning, and design in STEM education. It also offers guidance on adapting educational policy so that practitioners can better facilitate the application of AI in STEM education.

This book is a must-read for researchers, educators, students, designers, and engineers who are interested in the opportunities and challenges of AI in STEM education.

Editor(s): Fan Ouyang, Pengcheng Jiao, Bruce M. McLaren, Amir H. Alavi
Series: Chapman & Hall/CRC Artificial Intelligence and Robotics Series
Publisher: CRC Press
Year: 2022

Language: English
Pages: 395
City: Boca Raton

Cover
Half Title
Series Page
Title Page
Copyright Page
Table of Contents
List of Contributors
Editor Biographies
Section I AI-Enhanced Adaptive, Personalized Learning
Chapter 1 Artificial Intelligence in STEM Education: Current Developments and Future Considerations
1.1 Introduction
1.2 Paradigmatic Shifts of AI in STEM Education
1.2.1 Paradigm One: AI-Directed STEM Education
1.2.2 Paradigm Two: AI-Supported STEM Education
1.2.3 Paradigm Three: AI-Empowered STEM Education
1.3 Discussion and Future Considerations
1.4 Structure of the Book
1.5 Conclusions
Acknowledgment
References
Chapter 2 Towards a Deeper Understanding of K-12 Students’ CT and Engineering Design Processes
2.1 Introduction
2.2 Background and Motivation
2.2.1 21st-Century K-12 Classrooms
2.2.2 Understanding Students’ STEM Learning Strategies
2.2.2.1 Computational Modeling Strategies
2.2.2.2 Engineering Design Strategies
2.3 SPICE Curriculum and Learning Environment
2.3.1 System Design Perspectives
2.3.1.1 Evidence-Centered Design
2.3.1.2 Coherence across Modeling Representations
2.3.1.3 Domain-Specific Modeling Languages
2.3.2 SPICE Learning Trajectory and Curriculum
2.4 Methods
2.4.1 Implementation
2.4.2 Data Sources
2.4.3 Analysis Methods
2.5 Results
2.5.1 The Role of Computational Thinking
2.5.1.1 Effects on Computational Modeling
2.5.1.2 Effects on Engineering Design
2.5.1.3 Effects on Posttest Scores
2.5.2 Learning with Multiple Representations
2.5.3 Impact of Strategy Use on Learning
2.5.3.1 Computational Modeling Strategies
2.5.3.2 Engineering Design Strategies
2.6 Discussion and Future Directions
Notes
Bibliography
Chapter 3 Intelligent Science Stations Bring AI Tutoring into the Physical World
3.1 Introduction
3.2 Intelligent Science Stations: A Guided AI Mixed-Reality Interactive Experience
3.2.1 Physical Setup and AI Computer Vision Technology
3.2.2 Scenario
3.3 AI in the 3D Physical World versus on Flat Screen
3.4 AI Guidance Matters: Making Science Exhibits Yield Better Learning and More Engagement
3.5 The Kind of AI Guidance Matters: Scaffolded Science Inquiry Produces Better Engineering
3.6 Discussion and Next Steps
Note
References
Chapter 4 Adaptive Support for Representational Competencies during Technology-Based Problem-Solving in STEM
4.1 Introduction
4.2 Literature Review
4.2.1 Sense-Making Competencies
4.2.2 Perceptual Fluency
4.3 Empirical Studies
4.3.1 Combining Sense-Making Support and Perceptual-Fluency Support
4.3.2 How Should Sense-Making and Perceptual-Induction Activities Be Sequenced?
4.3.3 Should Representational-Competency Supports Adapt to Students’ Current Knowledge Level?
4.4 Discussion
Note
References
Chapter 5 Teaching STEM Subjects in Non-STEM Degrees: An Adaptive Learning Model for Teaching Statistics
5.1 Introduction
5.2 State of the Art
5.2.1 A Hierarchical Structure for Statistical Knowledge
5.2.2 Knowledge Space Theory
5.2.3 Dublin Descriptors
5.3 Methodology
5.3.1 Multidimensional IRT Model
5.3.2 Archetypal Analysis
5.4 Implementation
5.4.1 App Interface and Workflow
5.4.2 User Model and Data Processing
5.5 Simulation Data Results
5.5.1 Design of the Study
5.5.2 Results
5.5.3 Preliminary Application on Real-World Data
5.6 Discussion
5.7 Limitations and Future Work
5.8 Conclusion
References
Chapter 6 Removing Barriers in Self-Paced Online Learning through Designing Intelligent Learning Dashboards
6.1 Introduction
6.1.1 Intelligent Learning Dashboards
6.1.2 Automatic Generation of Formative Assessments
6.1.3 Automatic Analysis of Students’ Engagement
6.2 Overview of the Intelligent Learning Dashboard Framework
6.3 Adaptive Formative Assessments
6.4 Automatic Detection of Students’ Engagement
6.5 Learning Analytic Dashboards
6.6 Conclusions and Future Work
Acknowledgment
References
Section II AI-Enhanced Adaptive Learning Resources
Chapter 7 PASTEL: Evidence-Based Learning Engineering Methods to Facilitate Creation of Adaptive Online Courseware
7.1 Introduction
7.2 CyberBook
7.3 PASTEL
7.3.1 WATSON
7.3.2 SMART
7.3.3 QUADL
7.3.4 RAFINE
7.3.5 RADARS
7.4 Evaluation
7.4.1 SMART
7.4.2 QUADL
7.4.3 RAFINE
7.4.4 RADARS
7.5 Conclusion
Notes
References
Chapter 8 A Technology-Enhanced Approach for Locating Timely and Relevant News Articles for Context-Based Science Education
8.1 Literature Review
8.1.1 Science News Articles and Assessment in Context-Based Science Education
8.1.2 Automated Curriculum Alignment of Learning Resources in Science Education
8.1.3 Topic Modeling and Latent Dirichlet Allocation
8.1.4 Present Study
8.2 Methods
8.2.1 Data
8.2.1.1 Stage 1: Science Article Preprocessing and Vectorization
8.2.1.2 Stage 2: Science Article Topic Modeling
8.2.1.3 Stage 3: Topic Structure Prediction and Evaluation of the Units of Study
8.2.1.4 Stage 4: Science Article Recommendation Based on Topic Similarity
8.3 Results
8.3.1 Science Article Topic Analysis
8.3.2 Topic Prediction and Topic Evaluation of the Units of Study
8.3.3 Science Article Recommendation and Item Generation
8.4 Conclusions and Discussion
Appendix A
Appendix B
Appendix C
Appendix D
Notes
References
Chapter 9 Adaptive Learning Profiles in the Education Domain
9.1 Introduction
9.2 Academic Analytics
9.2.1 Decision Support in Education: Loyalty and Dropout
9.2.2 Academic Analytics at Politecnico di Torino
9.2.3 Research Questions
9.2.4 The Information System Course
9.2.4.1 Framework
9.2.4.2 Classroom
9.2.4.3 Course Delivery Organization
9.2.4.4 Assessment
9.2.4.5 Course Management: Student Behavior and Assessment
9.3 Academic Analytics Platform
9.4 Information Systems Course Data
9.4.1 Dataset
9.4.2 Dataset Initial Exploration
9.4.3 Clustering Algorithm Overview and Benchmark
9.4.3.1 DBSCAN
9.4.3.2 k-Means and k-Medoids
9.4.4 Performance-Based Clustering Visualization
9.4.5 Association Rules
9.4.5.1 Cluster 0: Exam Passed – ‘Advanced’ Python/UML Section, ‘Intermediate’ MC Section, and ‘Intermediate’ Project Work
9.4.5.2 Cluster 1: Exam Passed – ‘Advanced’ Python/UML Section, ‘Expert’ MC Section, and ‘Intermediate’ Project Work
9.4.5.3 Cluster 2: Exam Failed – ‘Basic’ MC Section, ‘Being Developed’ Python/UML Section, and No Project Work
9.4.5.4 Cluster 3: Exam Passed/Failed – ‘Intermediate’ Python/UML Section, ‘Advanced’ MC Section, and ‘Intermediate’ Project Work
9.4.5.5 Cluster 4: Exam Failed – ‘Being Developed’ Python/UML and MC Sections and ‘Intermediate’ Project Work
9.4.5.6 Cluster 5: Exam Failed – ‘Being Developed’ Python/UML and MC Sections and No Project Work
9.4.5.7 Cluster 6: Exam Passed/Failed – All ‘Intermediate’ Sections
9.4.5.8 Cluster 7: Exam Passed – ‘Expert’ Python/UML and MC Sections and ‘Intermediate’/‘Advanced’ Project Work
9.4.5.9 Cluster 8: Exam Failed – ‘Being Developed’ Python/UML Section, ‘Basic’/‘Intermediate’ MC Section, and ‘Intermediate’ Project Work
9.4.6 Discussion on the Results
9.4.7 Research Questions and Answers
9.5 Conclusions and Future Work
References
Section III AI-Supported Instructor Systems and Assessments for AI and STEM Education
Chapter 10 Teacher Orchestration Systems Supported by AI: Theoretical Possibilities and Practical Considerations
10.1 Classroom Orchestration
10.2 Artificial Intelligence for Classroom Orchestration
10.3 Research Context
10.4 The Role of AI in Orchestration Assistant Design
10.5 Classroom Context Interviews
10.6 Orchestration Assistant Design
10.6.1 During Class versus Beyond Class Assistance
10.6.2 Automated Systems versus Teacher Agency
10.6.3 Detailed Information versus Actionable Information
10.6.4 Classroom Management versus Ambitious Learning Practices
10.7 Discussion and Future Research
References
Chapter 11 The Role of AI to Support Teacher Learning and Practice: A Review and Future Directions
11.1 Background
11.1.1 Teacher Learning Perspectives
11.1.2 Technologies for Teacher Learning
11.1.2.1 AI-Based Teacher Tools
11.1.3 Design Guidance for AIEd Tools
11.1.4 The ICAP Framework
11.2 ICAP and AI-Based Technologies for Teacher Learning
11.2.1 Passive AIEd Technologies for Teacher Learning
11.2.2 Active AIEd Technologies for Teacher Learning
11.2.3 Constructive AIEd Technologies for Teacher Learning
11.2.4 Interactive AIEd Technologies for Teacher Learning
11.3 Discussion
11.4 Limitations
11.5 Conclusion
Acknowledgment
References
Chapter 12 Learning Outcome Modeling in Computer-Based Assessments for Learning
12.1 Psychometric Measurement for Learning Outcome Modeling
12.1.1 Classical Test Theory
12.1.2 Item Response Theory
12.1.3 Cognitive Diagnosis
12.2 Bayesian Networks
12.3 Bayesian Knowledge Tracing
12.4 Additive Factors Model
12.5 Deep Learning for Learning Outcome Modeling
12.5.1 Deep Knowledge Tracing
12.5.2 Other Deep Learning Approaches for Learning Outcome Modeling
12.6 Collaborative Filtering for Learning Outcome Modeling
12.6.1 Matrix Factorization
12.6.2 Collaborative Filtering-Based Approaches for Learning Outcome Modeling
12.6.3 Deep Learning–Based Collaborative Filtering
12.7 An Overview of Approaches for Learning Outcome Modeling
12.8 Conclusion
References
Chapter 13 Designing Automated Writing Evaluation Systems for Ambitious Instruction and Classroom Integration
13.1 Introduction
13.1.1 Overview of eRevise System
13.1.2 Chapter Overview
13.2 Considerations for Automated Writing Evaluation (AWE) System Design
13.2.1 Authentic Tasks That Aid Ambitious Teaching
13.2.2 Features of a Construct Are Identified and Named
13.2.3 Information Is at a ‘Grain Size’ to Support Learning
13.2.4 Underlying Algorithms Are Fair
13.2.5 System Supports Instructional Interactions and Discourse
13.3 Designing to Facilitate Widespread Adoption of AWE Systems and Integration in Classroom Routines
13.3.1 Education Policies
13.3.2 Values and Goals Held by School Leaders
13.3.3 Values and Goals Held by Teachers
13.4 Concluding Thoughts
References
Section IV Learning Analytics and Educational Data Mining in AI and STEM Education
Chapter 14 Promoting STEM Education through the Use of Learning Analytics: A Paradigm Shift
14.1 Introduction
14.2 Learning Analytics as an Enabler for a Paradigm Shift in STEM Education
14.2.1 The Advancement of Data Collection Techniques
14.2.2 The Expansion of Data Analysis Methods
14.2.3 The Emergence of Innovative Visualization Strategies
14.3 Facilitating Change in STEM Education with Theory-Driven Learning Analytics
14.4 Case Example: Examining Students’ STEM Learning with SRL Theory
14.4.1 Theory-Driven Data Collection
14.4.2 Theory-Driven Data Analysis and Visualization
14.4.3 Theory-Driven Interpretation of Results
14.5 The Future of STEM Education with Learning Analytics
References
Chapter 15 Using Learning Analytics to Understand Students’ Discourse and Behaviors in STEM Education
15.1 Introduction
15.2 STEM Education
15.3 Technology-Enhanced Environments to Support STEM Education
15.3.1 Teaching Teamwork for Collaborative Problem-Solving
15.3.2 Energy3D for Engineering Design
15.4 Learning Analytics to Analyze Generated Data
15.4.1 Analytical Methods
15.5 Case Studies
15.5.1 Science
15.5.1.1 Transformative and Non-Transformative Discourse
15.5.1.2 Self-Regulation
15.5.1.3 Multi-Faceted Engagement
15.5.2 Engineering
15.5.3 Mathematics
15.6 Research Gaps and Future Trends
15.7 Conclusion
References
Chapter 16 Understanding the Role of AI and Learning Analytics Techniques in Addressing Task Difficulties in STEM Education
16.1 Introduction
16.2 The Multifaceted Aspects of Task Difficulty
16.3 LA and AI in Digital Learning Environments
16.3.1 Virtual Labs
16.3.2 Serious Games and Simulations
16.4 Opportunities
16.4.1 Real-Time Feedback and Intervention
16.4.2 Personalized Adaptive Learning
16.4.3 Leveraging the Advances in Sensor Technology
16.5 Challenges
16.5.1 Task Difficulty and Its Operationalization
16.5.2 Data Ethics and Student Privacy
16.5.3 What Data to Collect and How
16.5.4 Confounding Factors
16.6 Conclusion
Note
References
Chapter 17 Learning Analytics in a Web3D-Based Inquiry Learning Environment
17.1 Introduction
17.1.1 Learning Analytics
17.1.2 Web3D Technology
17.1.2.1 What Is Web3D
17.1.2.2 Implementation Technologies of Web3D
17.1.3 Web3D-Based Inquiry Learning Environment
17.1.4 Web3D and Learning Analytics
17.2 Theoretical Basis of Learning Analytics
17.2.1 Knowledge Graph
17.2.1.1 The Origin of the Knowledge Graph
17.2.1.2 Educational Knowledge Graph
17.2.1.3 Theoretical Basis of Educational Knowledge Graph
17.2.1.4 Principles of Educational Knowledge Graph Construction
17.2.2 Learner Profile
17.2.2.1 What Is the Learner Profile
17.2.2.2 Research Status of Learner Profile
17.2.3 Theory and Record of Learning Behavior
17.2.3.1 Theoretical Basis
17.2.3.2 Standards for Recording Behavior Data
17.3 Technical Path of Learning Analytics
17.3.1 Realization Methods of Knowledge Graph
17.3.1.1 Construction of Educational Knowledge Graph
17.3.1.2 Storage and Presentation of Educational Knowledge Graph
17.3.2 How to Create a Learner Profile
17.3.2.1 Knowledge Level Assessment
17.3.2.2 Learning Behavior Assessment
17.3.2.3 Emotion and Attitude Assessment
17.3.3 Definition and Analytics Method of Learning Behavior
17.3.3.1 Definition of Learning Behavior Analytics
17.3.3.2 Analytics Method of Learning Behavior
17.4 Concrete Realization of Learning Analytics in a Web3D-Based Inquiry Learning Environment
17.4.1 Construction of Knowledge Graph of Chemistry Experiments in Middle School
17.4.2 Generation and Presentation of Learner Profile in Web-Based Inquiry Learning Environment
17.4.3 Analysis and Evaluation of Inquiry Learning Behavior
17.5 Summary and Outlook
References
Chapter 18 On Machine Learning Methods for Propensity Score Matching and Weighting in Educational Data Mining Applications
18.1 Introduction
18.2 Methods
18.2.1 Propensity Score
18.2.1.1 Assumptions for Propensity Score-Based Methods
18.2.1.2 Propensity Score Matching
18.2.1.3 Inverse Probability of Treatment Weighting
18.2.2 Random Forest
18.2.3 Ensemble Learning
18.3 Simulation Study
18.3.1 Data Generation
18.3.1.1 Generating Covariates (X)
18.3.1.2 Generating Treatment Assignment Indicator (Z)
18.3.1.3 Generating Outcome (Y)
18.3.2 Simulation Study Results
18.4 Students’ Success Case Study
18.5 Discussion
Acknowledgment
References
Chapter 19 Situating AI (and Big Data) in the Learning Sciences: Moving toward Large-Scale Learning Sciences
19.1 Introduction
19.2 A Landscape of Learning Sciences and Some Challenges It Faces
19.2.1 Outcomes
19.2.2 Student Factors
19.2.3 Contextual Factors
19.2.4 Replication Crisis? (Or Maybe Context Matters)
19.3 AI and Its Affordances for the Learning Sciences
19.3.1 Deep Student Models
19.3.2 Causal Learning Outcome Models
19.3.3 Natural Language Processing
19.3.4 Sensor-Free Student Factor Measures
19.3.5 Instructional Policy Learning
19.4 Promoting Equity
19.5 Conclusion
Acknowledgments
References
Chapter 20 Linking Natural Language Use and Science Performance
20.1 Introduction
20.1.1 Language Ability and Science Scores
20.1.2 Computer-Based Science Education
20.1.3 The Current Study
20.2 Method
20.2.1 Procedure
20.2.2 Participants
20.2.3 Data Collected
20.2.3.1 Pretest/Posttest Assessments
20.2.3.2 Individual Differences Data
20.2.3.3 Verbal Data
20.2.4 Transcriptions
20.2.5 Linguistic Variables
20.2.5.1 TAALES
20.2.5.2 TAACO
20.2.5.3 SEANCE
20.2.6 Statistical Analysis
20.3 Results
20.3.1 Non-Linguistic Model
20.3.2 Linguistic Model
20.3.3 Full Model
20.4 Discussion
20.5 Conclusion
Acknowledgments
References
Section V Other Topics in AI and STEM Education
Chapter 21 Quick Red Fox: An App Supporting a New Paradigm in Qualitative Research on AIED for STEM
21.1 Introduction
21.2 Design
21.2.1 Interview Triggers
21.2.2 Server-Side Platform
21.2.3 Client Side
21.2.3.1 Set-Up and Login
21.2.3.2 Presentation of Student and Trigger Information
21.2.3.3 Interview Recordings and Notes
21.2.3.4 Moving On (Next, Skip, End)
21.2.3.5 Data
21.3 Case Study
21.3.1 Betty’s Brain
21.3.2 Developing Interview Triggers
21.3.3 Procedure
21.3.4 Data
21.3.5 Data Coding
21.3.6 Impact on Scholarly Work
21.4 General Discussion and Conclusions
21.4.1 Summary
21.4.2 Applications
21.4.3 Limitations
21.4.4 Future Development
Acknowledgments
Note
References
Chapter 22 A Systematic Review of AI Applications in Computer-Supported Collaborative Learning in STEM Education
22.1 Introduction and Review of Previous Work
22.2 Methodology
22.2.1 Research Purpose and Questions
22.2.2 Searching the Literature
22.2.3 Selecting Studies
22.2.4 Data Extraction
22.2.5 Data Synthesis and Analysis
22.3 Results
22.3.1 RQ1: The Overall Trend
22.3.1.1 Type of Research
22.3.1.2 Educational Contexts
22.3.1.3 Research Focus
22.3.1.4 Research Methods
22.3.2 RQ2: AI Techniques Used to Support Group Formation
22.3.2.1 Group Size and Grouping Mechanisms
22.3.2.2 AI Techniques and Evaluation
22.3.3 RQ3: AI-Enabled Systems to Support Student Interactions
22.3.3.1 Student Interaction Design
22.3.3.2 Student Interaction Evaluation
22.4 Discussions and Implications
22.4.1 Overall Trend of AI Applications in CSCL Research
22.4.1.1 Type of Research Paper
22.4.1.2 Research Focus
22.4.1.3 Educational Contexts
22.4.1.4 Research Design
22.4.2 AI Applications to Support Group Formation
22.4.2.1 Issues of Design
22.4.2.2 Issues of Evaluation
22.4.3 AI Applications to Support Student Interactions
22.4.3.1 Issues of Design
22.4.3.2 Issues of Evaluation
22.5 Conclusions
References
Chapter 23 Inclusion and Equity as a Paradigm Shift for Artificial Intelligence in Education
23.1 Inclusion and Equity as a Paradigm Shift for Artificial Intelligence in Education
23.2 AI and DEI: A Bidirectional Relationship
23.2.1 AI for DEI: How Can the Principles and Methods of Artificial Intelligence Support Diversity, Equity, and Inclusion?
23.2.2 Inclusivity in STEM Introductory Courses
23.2.2.1 Performance in Introductory STEM Courses
23.2.2.2 Active Learning in Introductory STEM Courses
23.2.3 Collaboration and Discourse
23.2.4 Learning Assessments
23.2.5 DEI for AI: How Can the Principles of Diversity, Equity, and Inclusion Transform Artificial Intelligence in Education?
23.2.6 Person-Centered Variables, Outcomes, and Ownership
23.2.7 Revealing, Mitigating, and Preventing Biases in Analysis and Interpretation
23.2.8 Transparency in Feedback and Dissemination
23.2.9 Considering Who Will Use the AI
23.2.10 Educational Opportunities for AI
23.3 Ethics and Challenges
23.4 Conclusion
Acknowledgments
Note
References
Index