eLearning Object Review #3: Analysis, assessment, and ID review

I found this object by googling elearning scenario branching example "storyline". Several of the results were more than seven years old and relied on Flash, which is no longer supported, so I searched for something more recent and limited the results to the last five years. I liked this article because it linked to examples, including one of my favorites, Haji Kamal. The description of “A Support Net” identified it as a “choose your own adventure” style scenario eLearning, which interested me.


Analysis

What workplace performance does this scenario-based e-learning support? (Clark 2013, ch 1)

  • Accelerate expertise

  • Build critical thinking skills

  • Build skills impossible or impractical to gain through on-the-job performance

  • Promote learning transfer

  • Gain a return-on-investment

  • Motivate learning

  • Exploit technological resources effectively

  • Engage a target audience that already has basic job familiarity

What are the instructional goals? (Clark 2013, ch 1)

This course is part of a larger curriculum, “Making Sense of Mental Health Problems,” designed to help social workers engage with and understand mental health problems. These modules encourage learners to explore all possible causes and influences in a mental health diagnosis and let them practice responding to challenges they may encounter in the field.

Who are the learners? (Clark 2013, ch 4)

  • Novice

  • Some experience

  • Apprentice

  • Experienced

  • Mixed

  • Other

What are the scenario-learning domain(s)? (Clark 2013, ch 2)

  • Interpersonal skills

  • Compliance

  • Diagnosis and repair

  • Research, analysis, and rationale

  • Tradeoffs

  • Operations

  • Design

  • Team coordination

  • Other

What are the terminal learning objectives? (Clark 2013, ch 4 & 7)

The terminal learning objectives of “Making Sense of Mental Health Problems” are:

  • describe key theories and concepts that have informed debates about mental health diagnosis

  • outline how diagnostic systems have been developed and implemented

  • explain why diagnostic systems are challenged in the mental health field

(from https://www.open.edu/openlearn/health-sports-psychology/making-sense-mental-health-problems/content-section---learningoutcomes)

For this specific module, the terminal learning objective is to respond to mental health problems in impactful and optimal ways.

What are the enabling learning objectives? (Clark 2013, ch 4 & 7)

  • Review patient history and personal situations

  • Respond to patients’ questions, actions, or objections in a manner that has a positive impact on the patient

Complexity of responses (Clark 2013, ch 4)

  • Number of outcomes

    • One outcome

    • Multiple outcomes: each scenario has four different outcomes of varying patient impact based on the decisions made

  • Outcome precision

    • High solution precision: there are right and wrong answers that impact how the patient feels

    • Low solution precision

  • Interface response options

    • Limited interface response options: typically two multiple choice response options

    • Multiple interface response options

  • Social presence

    • High social presence

    • Medium social presence

    • Low social presence: self-paced, self-study environment

Scenario settings (Clark 2013, ch 5)

  • Office, meeting room

  • Computer

  • Technical shop, laboratory

  • Clinic, hospital, surgical suite

  • Equipment and instrument panels

  • Factory

  • Field site: each scene takes place in a different setting where individuals encounter those in need

  • Other

Trigger event (Clark 2013, ch 5)

  • Phone call

  • E-mail, text message

  • Interview

  • Failure or crisis

  • Murphy’s Law scenario

  • Other: each scene shows a person in crisis, and the trigger event is that person acting out their anger in an inappropriate way

Does your scenario outcome require identification and analysis of data? (Clark 2013, ch 5)

  • No

  • Yes

Types of guidance provided (Clark 2013, ch 6)

  • Faded support

  • Simple to complex scenarios

  • Open vs. closed response option: response options are limited

  • Interface navigation options: there are very few options at any time

  • Training wheels

  • Coaching and advisors

  • Worksheets

  • Feedback: feedback about the choices made is given at the end of the scenario, where learners can assess their choices by comparing them to other potential paths

  • Collaboration

Instructional approaches (Clark 2013, ch 7)

  • Tutorials

  • Expert solution demonstrations

  • Questions in demonstrations to promote engagement

  • Cognitive modeling examples to illustrate tacit knowledge

  • Example repositories linked to organizational knowledge base

  • Traditional instructor

  • Socratic instructor

  • Scenario facilitator

  • Other: from what I can gather, this module does not use any of the instructional approaches above. Because it is not aimed only at experts and can be offered to a wider audience, I suspect the designers deliberately left out instruction, allowing learners to draw on their own personal experiences.

Feedback features (Clark 2013, ch 8)

  • Specificity

    • Specific: this course gives learners specific feedback about their decisions while also allowing them to see other possible outcomes

    • General

  • Type

    • Instructional

    • Intrinsic: this module is built around giving intrinsic feedback on learners’ choices. Video gives learners an immersive environment in which to see the effects of their choices.

  • Frequency

    • Immediate: this module uses both feedback frequencies. Immediate feedback is given in the form of an impact meter that is always shown above the patient video.

    • Delayed: detailed, personalized feedback is given at the end of the module so that the learner can see the overall impact of their choices (a rough sketch of this structure follows this list).

  • Focus

    • Solution: while the module recognizes that solutions to mental health crises are varied, it does base its feedback on the ideal response.

    • Process: even though there is an ideal solution, this module allows learners to make mistakes during the process and practice asking the right questions.

    • Learning
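To make the structure described above more concrete, here is a minimal sketch (my own illustration, not taken from the reviewed course) of how a branching scenario with two-choice decision points, an impact meter updated immediately, and a delayed end-of-scenario summary might be modeled. All scene names, wording, and impact values are hypothetical.

```python
# Hedged sketch (illustrative only, not the reviewed course's implementation):
# a branching scenario with two-choice decision points, an "impact meter"
# updated immediately after each choice, and a delayed summary at the end.
from dataclasses import dataclass, field

@dataclass
class Choice:
    label: str        # text shown to the learner
    impact: int       # immediate change to the impact meter
    next_scene: str   # id of the scene this choice branches to

@dataclass
class Scene:
    prompt: str
    choices: list = field(default_factory=list)  # empty list = an ending

# Hypothetical scenes; wording and impact values are made up for illustration.
scenes = {
    "start": Scene("A distressed patient raises their voice.", [
        Choice("Ask what is upsetting them", +1, "listen"),
        Choice("Tell them to calm down", -1, "escalate"),
    ]),
    "listen": Scene("The patient begins to explain.", [
        Choice("Reflect their feelings back", +1, "good_end"),
        Choice("Jump straight to advice", -1, "mixed_end"),
    ]),
    "escalate": Scene("The patient becomes more agitated.", [
        Choice("Apologize and ask an open question", +1, "mixed_end"),
        Choice("End the conversation", -1, "poor_end"),
    ]),
    "good_end": Scene("The patient feels heard."),
    "mixed_end": Scene("The patient is calmer but guarded."),
    "poor_end": Scene("The patient disengages."),
}

def run(picks):
    """Walk the scenario for a fixed list of choice indexes; return the final
    impact meter, the ending reached, and the path taken (the material a
    delayed, end-of-scenario feedback screen would summarize)."""
    scene_id, impact, path = "start", 0, []
    for pick in picks:
        choice = scenes[scene_id].choices[pick]
        impact += choice.impact          # immediate, intrinsic feedback
        path.append(choice.label)
        scene_id = choice.next_scene
    return impact, scenes[scene_id].prompt, path

print(run([0, 0]))  # ideal path -> (2, 'The patient feels heard.', [...])
print(run([1, 1]))  # poor path  -> (-2, 'The patient disengages.', [...])
```

Each ending corresponds to one of the scenario’s possible outcomes, and the returned path is the kind of information the delayed feedback screen summarizes.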


Assessment Rubric

Each criterion is rated Exemplary (3 points), Minor concerns (2 points), or Serious concerns (1 point); the score earned appears next to the criterion name.

Use of scenario-based eLearning — Score: 3
  • Exemplary (3): Scenario-based e-learning content is for learners with some prior experience and supports one or more of the following: rare-occurrence tasks, critical thinking skills training, strategic tasks, compliance mandates, compressing time, or managing risk.
  • Minor concerns (2): e-Learning content may support learners with no prior experience but does support at least one of the following: rare-occurrence tasks, critical thinking skills training, strategic tasks, compliance mandates, compressing time, managing risk.
  • Serious concerns (1): It is unclear why a scenario-based e-learning design was chosen.

Complexity of responses — Score: 2
  • Exemplary (3): The complexity of responses is appropriate for the learning goal, learners’ expertise, and motivation levels.
  • Minor concerns (2): The complexity of responses is on target for the learning goal but not for the learners’ expertise and motivation levels.
  • Serious concerns (1): The complexity of responses is not appropriate for the learning goal, learners’ expertise, or motivation levels.

Interface response options — Score: 3
  • Exemplary (3): The interface response options are appropriate for the learners’ expertise level and learning objectives.
  • Minor concerns (2): The interface response options are a bit of a stretch for the learners’ expertise level and learning objectives.
  • Serious concerns (1): The interface response options are inappropriate for the learners’ expertise level and learning objectives.

Scenario settings — Score: 3
  • Exemplary (3): The scenario setting(s) is/are appropriate for the scenario-learning domains, learners, and learning goals.
  • Minor concerns (2): The scenario setting(s) is/are a bit of a stretch for the scenario-learning domains, learners, and learning goals.
  • Serious concerns (1): The scenario setting(s) is/are inappropriate for the scenario-learning domains, learners, and learning goals.

Trigger event — Score: 3
  • Exemplary (3): The trigger event is appropriate for the scenario-learning domains and goals.
  • Minor concerns (2): The trigger event is a bit of a stretch for the scenario-learning domains and goals.
  • Serious concerns (1): The trigger event is missing or inappropriate.

Types of guidance — Score: 1
  • Exemplary (3): The types of guidance are varied and appropriate for the learners’ expertise levels, scenario-learning domains, and goals.
  • Minor concerns (2): The guidance is appropriate for the learners’ expertise levels, scenario-learning domains, and goals.
  • Serious concerns (1): The guidance is not the best match for the learners’ expertise levels, scenario-learning domains, and goals.

Instructional approaches — Score: 1
  • Exemplary (3): The instructional approaches are appropriate and varied for learners’ expertise levels, motivation, prior knowledge, scenario settings, domains, learning goals, and objectives.
  • Minor concerns (2): The instructional approaches are appropriate for learners’ expertise levels, motivation, prior knowledge, scenario settings, domains, learning goals, and objectives.
  • Serious concerns (1): The instructional approaches are not the best for learners’ expertise levels, motivation, prior knowledge, scenario settings, domains, learning goals, and objectives.

Critical thinking — Score: 3
  • Exemplary (3): Actions taken, decisions made, cues used, rationale, rules of thumb, and monitoring are used throughout the e-learning to support learners’ critical thinking.
  • Minor concerns (2): Multiple different content-sensitive learner actions, decisions, or rationale are required throughout the e-learning.
  • Serious concerns (1): Content-sensitive learner actions or decisions are only required in one or two spots in the e-learning.

Feedback — Score: 3
  • Exemplary (3): All feedback designs (i.e., intrinsic, instructional, delayed, immediate, specific, general, solution, process, learning, reflection, checklists, rubrics) are appropriately provided for learner actions, and feedback is integrated throughout the scenario.
  • Minor concerns (2): A variety of feedback types are provided and appropriate for learner actions.
  • Serious concerns (1): Feedback is limited or not appropriate.

Interface — Score: 3
  • Exemplary (3): Navigation is intuitive.
  • Minor concerns (2): Navigation instructions are clearly explained.
  • Serious concerns (1): Navigation is difficult.

Interactions — Score: 3
  • Exemplary (3): All function properly.
  • Minor concerns (2): —
  • Serious concerns (1): Do not all function properly.

Chunks — Score: 3
  • Exemplary (3): Content is chunked into pieces small enough to follow easily without interrupting the flow.
  • Minor concerns (2): Chunks are large, but you can easily navigate to where you left off.
  • Serious concerns (1): Chunks are large and there is no way to get back to where you left off, or so small that the flow suffers.

Progression — Score: 3
  • Exemplary (3): Is logical and elegant throughout the object.
  • Minor concerns (2): Is logical throughout the object.
  • Serious concerns (1): Seems disjointed or does not build on previous screens.

Engagement — Score: 3
  • Exemplary (3): Multiple motivational engagement elements are used (e.g., stories, images, examples, narration).
  • Minor concerns (2): Only one or two cases or stories are used, but they include multiple relevant images.
  • Serious concerns (1): Stories or cases are not used, only brief examples. Images may or may not be relevant.

Images or video — Score: 3
  • Exemplary (3): Good quality (e.g., focus, lighting, background).
  • Minor concerns (2): Mediocre quality; you can generally tell what they are, but one or more are difficult to see or interpret.
  • Serious concerns (1): Poor quality; at least one image or video is too small or very blurry.

Audio — Score: 3
  • Exemplary (3): Good quality (e.g., volume, tone, pace, inflection, no distractions).
  • Minor concerns (2): Mediocre quality; you can make adjustments that allow you to access the information.
  • Serious concerns (1): Poor quality; you can’t hear some or all of the audio.

Length — Score: 3
  • Exemplary (3): Module(s) is/are 6–15 minutes.
  • Minor concerns (2): Module(s) is/are 15:01–20 minutes.
  • Serious concerns (1): Module(s) is/are longer than 20 minutes.

Accessibility minimums — Score: 3
  • Exemplary (3): Screen descriptions, closed captions, and image alt tags are provided and logical.
  • Minor concerns (2): Closed captions and image alt tags are provided.
  • Serious concerns (1): No clear evidence of accessibility considerations in the e-learning object.

Total point score: 49 (Better, or best)

Qualitative scoring guide

Better, or best = 47 - 54 points

This module is an e-learning exemplar demonstrating significant evidence of effective instructional design.

Good, accomplished = 38 - 46 points

This module meets the basic criteria for e-learning instructional design.

Needs work = less than 38 points

I bet you could offer some suggestions to help improve the instructional design significantly for this e-learning. 
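To make the scoring mechanics explicit, here is a minimal sketch (my own, not part of the rubric template) that tallies the eighteen criterion scores from the rubric above and maps the total onto these qualitative bands.

```python
# Hedged sketch: sum the rubric scores given above and map the total to the
# qualitative scoring guide.
scores = {
    "Use of scenario-based eLearning": 3, "Complexity of responses": 2,
    "Interface response options": 3, "Scenario settings": 3,
    "Trigger event": 3, "Types of guidance": 1,
    "Instructional approaches": 1, "Critical thinking": 3,
    "Feedback": 3, "Interface": 3, "Interactions": 3, "Chunks": 3,
    "Progression": 3, "Engagement": 3, "Images or video": 3,
    "Audio": 3, "Length": 3, "Accessibility minimums": 3,
}

total = sum(scores.values())
if total >= 47:
    band = "Better, or best"
elif total >= 38:
    band = "Good, accomplished"
else:
    band = "Needs work"

print(f"{total}/{3 * len(scores)} points: {band}")  # 49/54 points: Better, or best
```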

Based on Clark, R. C. (2013). Scenario-based e-learning: Evidence-based guidelines for online workforce learning. San Francisco, CA: John Wiley & Sons.


ID Review

Below, I comment on each of the criteria that scored below Exemplary.

  • Done well: The responses were easy to understand and matched the tone of the scenarios.

    Improvements to consider: While it complicates the branching, I think the responses could include more choices to cover the gamut of expected responses.

    Why you think these improvements are needed: More response options make the scenario more believable and realistic for a learner in the field.

    Guidance on how to make the improvements: Ask novice learners what their responses would be to the scenario and use the popular answers as potential responses.

  • Done well: The interface is intuitive and simple.

    Improvements to consider: Offer an option that explains how to navigate through the course.

    Why you think these improvements are needed: Some learners may not be familiar enough with eLearning to comfortably navigate.

    Guidance on how to make the improvements: Include a “help” button (question mark or information icon) that displays navigation directions, or present those directions at the beginning of the course.

  • Done well: The feedback given at the end of the scenario is detailed and actionable.

    Improvements to consider: Consider adding in additional instructional approaches.

    Why you think these improvements are needed: Learners could learn from instruction and comparison to how experts would handle patients.

    Guidance on how to make the improvements: Add a section in the feedback that describes how an expert would navigate through the scenarios.

References

Clark, R. C. (2013). Scenario-based e-learning: Evidence-based guidelines for online workforce learning. San Francisco, CA: John Wiley & Sons.

Adapted from Giacumo, L. Template for your analysis, critique, and assessment.

eLearning Object Review #1

I have the opportunity in this post to review Blue Beta Facilities Orientation.

Image shows a screenshot of the Blue Beta Facilities Orientation training.

In this post, I will be identifying:

  1. the course topic

  2. the relevant characteristics of the target learner audience (what can you infer?)

  3. the knowledge and/or skill type(s)

  4. the learning domain(s)

  5. the assessment method(s) (i.e., response options, test items)

  6. the trigger event(s) (how do you get the learner to act?)

  7. the guidance technique(s) (how does the learner know what to do next?)

  8. the advisor type(s) (if any)

This course is an example of a linear hierarchical (directive) e-learning object.

  • Linear: Information is organized into chunks that naturally build on one another. The learner is limited by the interface to only view the information in a specific order, one topic at a time.

  • Hierarchical (directive): Similar to a textbook’s design, an eLearning course can have several lessons, each with several topics, which can be presented with multimedia such as text, images, animation, audio, and video clips (Chyung, 2007, p. 3). A directive approach is ideal for workers in new jobs because it offers small chunks of knowledge at a time and allows learners to observe and listen while periodically responding to questions (Clark, 2013).

The (1) topic of the course is the office facilities and any information employees will need to know to work in the facilities, both now and in the future. The (2) audience, a company new hire, has no prior knowledge of the office layout or rules. They are considered novice learners with high motivation, as they are eager to start their first day. We assume they are familiar with the technology on which the training is being delivered. Orientation activities are usually completed at the learner’s pace, so there are no time constraints on this eLearning object. (Of the learner characteristics to consider when developing an eLearning object, age, gender, cultural background, prior education, and prior work experience are not relevant here.)

There are three categories of eLearning content: declarative, procedural, and situated. This eLearning example falls under the declarative (4) knowledge domain because it is concerned with “knowing what.” Specifying the topic’s learning category and the level of learning helps developers determine the most appropriate methods and media to deliver the content (p. 4). The (3) content-type is concepts and facts that will be useful to the learner in the future.

The (6) triggering event should be realistic and compelling (Clark, 2013, p. 64). In this course, the triggering event occurs on the first page of the eLearning. Here, the course provides a quick overview of “why learn this information” and gives learners a clear idea of the expected outcome so that they are prepared (p. 64). The module asks the learner to interact by clicking “Start Course” to view the next topic.

Clark identifies nine types of guidance techniques: faded support, simple to complex scenarios, open vs. closed response options, navigation options, training wheels, coaching and advisors, worksheets, feedback, and collaboration (p. 77). Articulate Rise uses fairly intuitive navigation options as a (7) guidance technique. As the learner reads from top to bottom, they reach a barrier that asks them to click to continue. This is the typical way an audience interacts with a webpage, both in scrolling and in clicking through menus. Because the training is delivered in this manner, it is important to know the learner’s ability to use the technology. Rise also employs signaling (via animations and hover states) to indicate where the learner should click.

For its (5) assessment, this eLearning object uses a test with multiple-choice and yes/no questions that provides immediate corrective feedback.

In eLearning, an (8) advisor can appear to provide context-specific guidance or direction at the moment of need (Clark, 2013, p. 82). Because of Articulate Rise’s simple interface, there is no built-in option for an advisor to appear. A designer could include directions to learners about what to do next, but those would appear alongside the rest of the content as the user scrolls through the lessons.

References

Chyung, S. Y. (2007). Learning Object-Based e-Learning: Content Design, Methods, and Tools. The eLearning Guild’s Learning Solutions. https://www.learningguild.com/pdf/2/082707des-temp.pdf

Clark, R. C. (2013). Scenario-based e-learning: Evidence-based guidelines for online workforce learning. San Francisco, CA: John Wiley & Sons.

Going beyond ADDIE: My introduction to LeaPS

My first job in eLearning revolved around the ADDIE (Analysis, Design, Development, Implementation, and Evaluation) model. At this job, we built custom training for pharmaceutical sales reps. Being on the development side of these training modules, I worked with instructional designers to create assets based on their storyboards. At the time, this was the only exposure I had to IDs, and (embarrassingly) I thought instructional design was synonymous with designing eLearning modules. After all, the company’s purpose was to develop eLearning training, and it achieved that purpose very well.

Image depicts a round shape divided into five sections, each with its own heading. The sections are A: Analyze, D: Design, D: Develop, I: Implement, and E: Evaluate.

A version of the ADDIE model approach to training used by my current organization.

This brings me back to ADDIE. In that environment, ADDIE (at least as we used it) served us very well. A client would come to us with a training request, my team would ask a few questions (the Who, What, When, Where, Why, and How?), we would design a storyboard, develop the training, send it off to the client, and update as necessary based on the client’s feedback and the learners’ reception. Simple, right? …and that was the extent of my relationship with ADDIE.

I think the beauty of ADDIE lies in its simplicity. You can explain it easily to clients. It can be used as a linear, step-by-step process. When I was onboarding at that job, a one-page PDF was all I needed to learn the model. Looking back, I can recognize that my experience in an ADDIE-driven environment, while maybe not unique, was certainly not all that it could be. I don’t think I ever appreciated what ADDIE was capable of.

As I would come to learn, ADDIE is not a single model in and of itself, but a family of models with a common structure. It makes sense then that without embracing a more customized and detailed version of an ADDIE model, we did not understand how our deliverables could benefit from the process. After all, looking at the general ADDIE model gives users no indication of what to focus on, what questions to ask, what data points to collect, what deliverables each step should yield, how those deliverables add to the process…while it’s a nice model to reference, it does little to inspire action. No wonder I didn’t appreciate the power of models when I entered my master’s program.

During my OPWL master’s program, I was shocked and overwhelmed by how many models existed to help HPT practitioners accomplish their goals. I had always considered models to be a pretty summary or a way of presenting information rather than a systematic process to be followed. Suddenly I was inundated with dozens of models at my disposal, to help me work through issues. I’m lucky that I built a better relationship with models before I was introduced to OPWL’s very own LeaPS ID model.

LeaPS ID Model, introduced by the OPWL Master’s Program at Boise State

Thanks to the expertise of a group of professors at Boise State and their willingness to share it, the LeaPS ID model offers a new approach to instructional design. Building upon and adding to many proven instructional design models, LeaPS gives more detail, insight, direction, and suggestions to beginner instructional designers. Admittedly, the model is overwhelming at first glance. The graphic designer in me wants to take a stab at making the information more digestible and fluid (maybe a future post?). But the usefulness of the model itself is undeniable. I already have ideas about its application to my current organization and what it could mean for the future of our deliverables.

If you’re curious, check out the YouTube videos below for more information on the LeaPS ID Model:


What’s next:

  • How my current organization could benefit from the LeaPS approach