Context is King, Content its Queen

Reiterated from https://www.techwyse.com/blog/content-marketing/why-context-is-king-the-connection-and-impressions-of-context-and-content/

When Bill Gates wrote his “Content is King” article in 1996, it was within the context of the development of the internet as a business case, and of how important it was that content transition from old media and formats in ways that would engage, and provide an opportunity for personal involvement far beyond what the old media offered. He was of course right, and what the advertising industry has since learned is that context is even more important if you really want to profit from the individual. It is now time to extend this insight beyond the business world and turn it into the driving force for learning experiences as well, by consciously laying the groundwork for a context-sensitive environment for adaptive learning.

Contextual learning

Growing numbers of teachers today are discovering that most students’ interest and achievement improve dramatically when they are helped to make connections between new knowledge and experiences they have had, or with other knowledge they have already mastered. This approach recognises that learning is a complex and multifaceted process that goes far beyond drill-oriented, stimulus-and-response methodologies. The idea is that learning occurs only when students process new information or knowledge in such a way that it makes sense to them within their own frames of reference. The mind naturally seeks meaning in context by searching for relationships that make sense and appear useful.

Contextual learning, a theory based on the constructivist theory of learning, which focuses on how humans make meaning in relation to the interaction between their experiences and their ideas, is thought to have the following characteristics:

  • emphasising problem solving
  • recognising that learning needs to occur within multiple contexts
  • assisting students in learning how to monitor their learning and thereby become self-regulated learners
  • anchoring teaching in the diverse life context of students
  • encouraging students to learn from each other
  • employing authentic assessment

These characteristics all point to the need to base education on a more bottom-up, individual and adaptive model, where the learning experiences, the role of the teacher, and the construction of curricula and learning material together lay the groundwork for adaptive learning experiences that can relieve the pain points learners often encounter in more traditional top-down approaches.

This post examines the roles of content within multiple contexts when creating learning experiences, and looks at possible representations and implementations of both content and context in digital learning.

What are the roles of context and content in learning?

“Context” is the setting in which a phrase or word is used (from Latin contextilis, “woven together”).

“Content” is the words or ideas that make up a piece (from Latin contensum, “held together”, “contained”).

These definitions illustrate well our idea that content is to be woven together and contained by context, which acts as the defining element determining the relevant representation of the content at any given time.

Further, in the context of Learning eXperience, we add the learner, woven together with both context and content, when creating an adapted Learning eXperience. Context should then not be seen as stable, but rather as dynamically changing in accordance with the learner’s interactions with both the context and the learning material. Interactions are part of the context, and the context can therefore only be predicted to a certain extent, making the boundaries between context and content fuzzy, since content needs to be as dynamic as context. One generates the other, and one may not exist without the other when considering the adaptive needs of the learner. This interrelationship makes content and context bleed into each other, but it still rests on the basic dichotomy between content and context as self-reliant, stand-alone entities. As such, we should be able to express a set of intrinsic attributes that each possesses in accordance with the roles played in a learning experience, in order to determine how we can design solutions that create truly adaptive Learning eXperiences.

What is a learning context?

The definition of what a context is, is, as anything else, dependent on the context in which it is discussed. So, to establish the premises that could lead to a definition of what we consider a Learning Context to be, we should start by breaking it down into its constituents:

“Learning” as defined by https://en.wikipedia.org/wiki/Learning:

«Learning is the act of acquiring new, or modifying and reinforcing, existing knowledge, behaviors, skills, values, or preferences and may involve synthesizing different types of information. The ability to learn is possessed by humans, animals, plants and some machines.»

This definition of learning comprises most of the properties we see as being part of the learning process. An adaptation of Ambrose et al. (2010) further completes our view of what learning is and should be:

  • “Learning is a process, not a product.”
  • “Learning is a change in knowledge, beliefs, behaviors or attitudes.”
  • “Learning is not something done to students, but something that students themselves do.”

“Context” as defined by https://en.wikipedia.org/wiki/Context:

When trying the same lookup for «context», the definition does not lend itself as readily, as it is of course contextually ambiguous. If we follow the link to Wiktionary’s definition at https://en.wiktionary.org/wiki/context, we find a generic definition which serves well as a basis to build on:

«The surroundings, circumstances, environment, background or settings that determine, specify, or clarify the meaning of an event or other occurrence»

combined with the definition given in de Figueiredo (2005):

«the set of circumstances that are relevant when someone needs to learn something»

then, if we specify that the set of circumstances should be relevant to the learner’s specific needs in order to learn optimally, the following definition is closer to the scope and context of this article:

«A learning context is the set of circumstances that determine, specify, or clarify the meaning of an event, statement, or idea, in terms of which it can be fully understood, and in which it meets a learner’s specific needs in order to learn optimally»

Two main types of learning context

In theories and philosophies of learning, the definition of what a learning context is seems to fall into two main camps:

de Figueiredo (2005) describes the first: the view that a learning context is external to the learner and to the activities in which the learner is engaged. It is thus seen as the environment where the activities take place. Context is then delimited, in the sense that we feel capable of recognising where it begins and where it ends. It is also seen as stable and driven by immutable laws, so that we can predict its evolution over time and space, even if it changes. Thus, when developing content for a given course, we take the context into account beforehand, in the elaboration of our materials, and then forget about it, trusting that its behaviour will always be as expected.

In contrast, within the context of learning experience and the constructivist paradigm, context cannot be located and delimited. Context is only perceived through its interactions with the learner, the interactions organising the context as much as they organise the learner’s experience. To a large extent, context is the interaction. Taking this into account, we should then be able to refine our definition of what a learning context might be:

«A learning context is the set of dynamically perceived circumstances, compiled through the learner’s interaction with learning material, that determine, specify, or clarify the meaning of an event, statement, or idea, in terms of which it can be fully understood and in which it meets a learner’s specific needs in order to learn optimally»

Content

What, then, can be considered content in our ever more complex and interconnected world? First of all, we need to set the premise for this discussion: we are talking about digital content, specifically digital learning content. In the context of our previous discussion, content is increasingly perceived as needing to be adapted in order to feel relevant to the individual. Mitchell Kapor likened getting content from the internet to taking a drink from a fire hydrant, indicating that the problem of information overload and info glut, a primary concern in the early stages of the internet, hasn’t really improved. The shift towards a content-centered, connected, and slowly but increasingly more intelligent internet has laid the ground for the emergence of the tools needed to overcome the content tsunami, but we still need to apply them systematically, especially within the field of education, in order to reach our goal of creating effective Learning Experiences. The question is: what, then, are the measures needed to achieve what we could call «digital wisdom»? A quote from Daniel J. Boorstin pinpoints our challenge:

«Technology is so much fun but we can drown in our technology. The fog of information can drive out knowledge.»

The technology applied has to lead to convergence for the individual, instead of becoming part of the divergence that translates into the ever-increasing amount of generalised and fragmented content, including learning content, produced for consumption on the internet.

According to a presentation by Steve Wheeler on digital pedagogy, the learner’s experience entails, among other things, answering these questions:

  • How do I find stuff?
  • How do I know it’s accurate?
  • How do I share content?
  • How do I filter content?
  • How do I keep up with all the news?
  • How do I organise content?
  • How do I categorise content?

These are all pertinent questions about content in an objective or general context, but in order to find content that is individually relevant, the following question has to be answered.

«How do I find learning content adapted to my needs?»

The adapted content we see today in commercial services such as Google and Netflix is an example of the need for adaptation increasingly being met by more sophisticated functionality implemented across the web as a whole. Based on big data analysis and recommendation algorithms, it is in many cases sufficient for providing product recommendations, advertising and the like to the individual. An optimal learning experience, though, demands a higher degree of accuracy to be effective. Learning software and frameworks provide a more constrained environment within which it is easier to implement and apply the mechanisms needed to answer the question above, and if we do, it might in the long run lay the groundwork for, and provide the data needed for, a higher degree of accuracy on the internet as a whole.

A tentative definition of learning content, then, could be:

«any digital artefact intended for learning, capable of carrying meaningful information to the end-user»

This definition works in a more objective context, but as we’ve already established, content needs to be adapted to the user’s needs, and so the definition needs to convey this:

«any digital artefact intended for learning, capable of carrying adapted, relevant and meaningful information to the end-user»

Repurposable modular content

Creating adapted content suggests that one needs, to a certain extent, to operate with content elements that are repurposable in different contexts, and as such as modular as possible, so that they can be mixed and re-mixed in concert with other elements and still form a coherent whole. This is somewhat of a challenge, to say the least, especially in a traditional educational context, since creating modular content is quite dependent on context during production, and it becomes an impossible task to manually create content for an infinite multitude of possible contexts. Digitally, we think it is possible to create software that can understand and define, for instance, the boundaries of meaning of a given piece of content, how it interfaces with other content, and the needed granularity of a given content module.

Modular content would have to satisfy at least the following points:

  • It has to be repurposable
    • making it possible to create multiple versions of the same content for different contexts
  • It has to have a clear, maybe standardised structure
    • smaller chunks of content that represent clear topics as building blocks
  • It has to be presentation-independent
    • raw content without formatting
  • It needs meaningful metadata
    • describing the content for easy querying

The challenge, then, is to make content reusable and repurposable on the fly, based on dynamically assembled attributes from the learner’s context(s). As we have established, the context should always be in the driver’s seat, providing the clues to the whats, hows and whens of the learning experience. Just as importantly, the context should also, to a significant degree, inform the composition of the content itself. Letting context completely determine the composition of content may never be fully achievable, but we believe one can at least reach a close approximation of it. What is needed in any case is to define what the smallest self-sufficient modular content element would be.
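As a rough sketch of what such a modular content element could look like in software, the following Python example models a content module as raw, presentation-independent text with metadata, and selects the modules whose metadata matches a learner’s context. The class, field names and matching rule are our own illustrative assumptions, not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class ContentModule:
    """A minimal, presentation-independent chunk of learning content."""
    id: str
    topic: str                                    # the single subject this module addresses
    body: str                                     # raw content, no formatting
    metadata: dict = field(default_factory=dict)  # e.g. level, language, prerequisites

def select_modules(modules, learner_context):
    """Return modules whose metadata is compatible with the learner's context.

    A module matches when every metadata key it declares either is absent from
    the context or has the same value there (a deliberately naive rule).
    """
    def matches(module):
        return all(learner_context.get(k, v) == v for k, v in module.metadata.items())
    return [m for m in modules if matches(m)]

modules = [
    ContentModule("m1", "photosynthesis", "Plants convert light to energy...",
                  {"level": "beginner", "language": "en"}),
    ContentModule("m2", "photosynthesis", "The light-dependent reactions...",
                  {"level": "advanced", "language": "en"}),
]

context = {"level": "beginner", "language": "en"}
print([m.id for m in select_modules(modules, context)])   # ['m1']
```

The point of the sketch is simply that the content itself stays raw and reusable, while the metadata carries everything the context needs in order to pick it.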

Designing self-sufficient modular content: a concept centered approach

As a company, we have since our early beginnings believed in, and championed, topic centered learning in the context of the constructivist learning paradigm. We believe in a topic centered, networked approach to information architecture as an aid to learning for the individual, which in many cases aligns with what we believe to be part of how the human brain understands and structures information. A topic centered approach to content aligns with the ideas of modularity and repurposing, and we have been researching what it means for content to be atomic and repurposable.

We find the approach of the Darwin Information Typing Architecture (DITA) very interesting in that sense. It is an OASIS standard targeted at structuring content for reuse, based on the assumption that the smallest self-sufficient content element would have to be based on a topic centered approach. A key feature of DITA is that information is organised and stored as modular chunks of content. These chunks, or topics as they are known, can be reused as building blocks of content. Topics can also be “typed”; that is, you can create different types of topics with a predefined structure that is appropriate only to that topic type. All topics in DITA are built on a single model of a generic topic. This generic topic type defines the elements that are common to topics of all types; however, DITA recognises that different topic types need different substructures and allows for variances, which makes it interesting when wanting to deal with multiple contexts. The DITA content model breaks information down to the element level, e.g. section, paragraph, sentence, and assigns topics to these elements. In fact, this passage from the DITA 1.2 specification describes an approach to our exact conundrum:

«Classically, a DITA topic is a titled unit of information that can be understood in isolation and used in multiple contexts. It should be short enough to address a single subject or answer a single question but long enough to make sense on its own and be authored as a self-contained unit. However, as content in many cases won’t behave in a regular manner, DITA topics can also be less self-contained units of information, such as topics that contain only titles and short descriptions and serve primarily to organize subtopics or links or topics that are designed to be nested for the purposes of information management, authoring convenience, or interchange.»

This approach is certainly a viable starting point for modelling modular content, as one can easily imagine traditional document elements like titles, sentences, tables, lists etc. as being instances of a smallest modular element, being capable of both standing alone, as well as being dynamically repurposed and orchestrated in different contexts. If one then assigned metadata to these elements, say from a given ontology representing some knowledge domain, it would be possible to make computational inferences relying on the collected contextual information for a given Learning eXperience, and then to dynamically create adapted content for it. As discussed above, orchestrating perfect content by query and assembly is quite a challenge, since the initial production of content is inherently context aware, but this approach might be able to take us a long way in creating content that is meaningful. In addition, we can imagine the possibility of applying measures such as Natural Language Processing (NLP) algorithms as well as other algorithms from the world of machine learning, to the resulting automatically assembled content in order to improve the text to achieve an even greater degree of coherence.
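To make the idea of querying and assembling typed topics slightly more concrete, here is a small Python sketch that parses a simplified, DITA-inspired XML fragment and assembles only the topics whose audience metadata fits the current context. The markup is a hand-written approximation for illustration, not valid DITA, and the audience attribute and filtering rule are our own assumptions.

```python
import xml.etree.ElementTree as ET

# A simplified, DITA-inspired fragment: typed topics with audience metadata.
# This is an illustrative approximation, not valid DITA markup.
source = """
<topics>
  <topic id="cells-intro" type="concept" audience="beginner">
    <title>What is a cell?</title>
    <body>The cell is the basic structural unit of all organisms.</body>
  </topic>
  <topic id="mitosis-task" type="task" audience="advanced">
    <title>Observing mitosis</title>
    <body>Prepare an onion root tip slide and identify the phases of mitosis.</body>
  </topic>
</topics>
"""

def assemble(xml_text, audience):
    """Pick the topics matching the learner's audience and join them into one text."""
    root = ET.fromstring(xml_text)
    chunks = []
    for topic in root.findall("topic"):
        if topic.get("audience") == audience:
            title = topic.findtext("title", default="")
            body = topic.findtext("body", default="")
            chunks.append(f"{title}\n{body}")
    return "\n\n".join(chunks)

print(assemble(source, audience="beginner"))
```

In a real system the selection criteria would of course come from the learner’s context rather than a hard-coded argument, and the assembled text could then be handed on to NLP post-processing as discussed above.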

If we can achieve this, a more complete definition of adapted content might be:

«any digital learning artefact, capable of carrying adapted meaningful information to the end-user, simultaneously capable of standing alone as well as functioning in concert with other artefacts within multiple contexts»

Context + Semantics to produce Adapted Content

Within our definition of learning context, we have to consider the hypothetical possibility that there are as many different contexts for a given piece of content as there are individuals. In real life, realities and individual contexts group into fewer facets, given the somewhat objective nature of our common reality. The fuzzy distinction between context and content, driven by the individual’s prerequisites, creates the dynamic that should underlie the composition of adapted content and context in order to produce effective learning experiences.

Gilliot and Garlatti (2009) state that technology-enhanced learning systems must have the capability to reuse learning resources and web services from large repositories, to take the context into account, and to allow dynamic adaptation to different learners. They further state that reuse of learning resources requires interoperability at the semantic level, and suggest that knowledge models and pedagogical theories can be fully represented by means of a semantic web approach.

In Guescini et al. (2006), the need for a poly-faceted information architecture is discussed as a way of modelling the multifaceted nature of reality, and the artefacts contained within it, in order to combat information overload. It argues that information should be consciously designed in terms of horizontal and vertical levels, where each dimension provides its own multifaceted approach to the design, together forming a coherent whole.

The horizontal level would in practice be determined by attributes like the choice of subject, the terms of the language, epistemological assumptions, methods for establishing facts, representation techniques, etc., while the vertical level would take into account the different levels of granularity discussed above regarding atomic and modular content, and would as such take the form of modular elements with varying degrees of detail. The resulting information architecture forms a detailed coordinate system which can serve as a basis for the assembly of content with the desired granularity, relevant within the desired perspective and context for an adapted Learning eXperience.

Guescini et al. also discuss the technological aspects of how one can implement a polyscopic or multifaceted information architecture by the use of Topic Maps. This aligns somewhat with the topic centered organisation of the DITA information architecture, which makes DITA a possible contender for the representation of both the horizontal and the vertical facets discussed above. In addition, DITA provides a full-fledged model for document structure representation, all serialised in XML, making it well suited for computational processing when automatically assembling adapted content.

The ‘Topic Map’-like map elements in the DITA standard seem to be modelled loosely on the Topic Maps standard, suggesting that DITA might pair well with Topic Maps when wanting to add semantic technology as the driving force behind a learning experience application. The DITA standard is by no means tightly coupled to the Topic Maps standard, and one could just as easily choose RDF when implementing a semantic backbone. What is important is that it operates with topics, topic types, and the notion of perspective, which is most easily modelled with Topic Maps, but can also be done with RDF by jumping through some extra hoops. Even more importantly, DITA operates with the notion of relationship tables, making it possible to express interconceptual relations, in effect creating ontologies, which makes it very suitable in tandem with semantic technologies.
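Whichever of the two standards one chooses, the underlying idea is a graph of typed relations between concepts. The following sketch, written in plain Python rather than with any particular Topic Maps or RDF library, stores such relations as triples and walks a ‘prerequisite’ relation to infer what a learner should master before a given concept; the concepts and relation names are invented for illustration.

```python
# Interconceptual relations stored as (subject, relation, object) triples,
# the common denominator of both Topic Maps and RDF style modelling.
triples = {
    ("photosynthesis", "prerequisite", "cell-structure"),
    ("cell-structure", "prerequisite", "basic-chemistry"),
    ("photosynthesis", "part-of", "plant-biology"),
}

def prerequisites(concept, triples):
    """Infer all (transitive) prerequisites of a concept by walking the graph."""
    found, frontier = set(), {concept}
    while frontier:
        nxt = {o for (s, r, o) in triples if r == "prerequisite" and s in frontier}
        frontier = nxt - found
        found |= nxt
    return found

print(prerequisites("photosynthesis", triples))
# {'cell-structure', 'basic-chemistry'}
```

This kind of simple inference over an ontology of relations is what would let a learning experience decide, for example, which content to surface before the topic the learner actually asked for.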

If one considers these technologies in terms of the context / content dynamic, they do not clearly place themselves within one or the other, but the emphasis on content structures in DITA suggests that it would be more instrumental in representing content, while semantic technologies could be more instrumental in modeling contexts, as well as providing the tools needed to infuse logic and intelligence, resulting in adapted learning experiences.

Conclusion

We have examined the roles of learning content and learning context and found that the boundaries between them are fuzzy, and that learning content needs to be as dynamically defined as learning contexts. In the context of Learning eXperience, the one does not provide the learner an adapted experience without the other. We have expressed tentative definitions for both, trying to express as accurately as possible their nature and function as part of an adaptive learning experience. We have looked at possible technological representations for both, providing a starting point for further research within the more limited scope of learning environments, with the goal of laying a possible technological foundation for a more adaptive learning experience outside these environments as well, as the internet as a whole is still in dire need of more structured ways of adapting content and context for end users.

References

Ambrose, S.A., Bridges, M.W., DiPietro, M., Lovett, M.C., & Norman, M.K. (2010). How learning works: Seven research-based principles for smart teaching. San Francisco: Jossey-Bass

Gilliot, J.-M., & Garlatti, S. (2009). An adaptive and context-aware architecture for future pervasive learning environments. Télécom Bretagne, 29/09/2009.

Guescini, R., Karabeg, D., & Nordeng, T. (2006). A case for polyscopic structuring of information. In Charting the Topic Maps Research and Applications Landscape, Lecture Notes in Computer Science, Vol. 3873, pp. 125-138.

de Figueiredo, A. D. (2005). Learning contexts: A blueprint for research. Interactive Educational Multimedia, 11, 127-139.


What Is Your Learning eXperience?

What if you could learn in an environment that knows the Whats, Whens, Wheres, and Hows needed in order for you to have an optimal learning outcome? The LXMatters website is where we, a group of enthusiasts within the EdTech space spearheaded by Cerpus, a learning technology company in Northern Norway, document and discuss our research into how we can apply a methodological approach to digital learning resulting in truly adaptive learning experiences. What’s more, we would like to take this opportunity to encourage and invite other people to contribute their thoughts, knowledge, insights, and experience with regards to everything learning experience-related. Your contribution to the LXMatters initiative is genuinely appreciated.

Wikipedia states the following about the more general topic of User Experience:

User eXperience (UX) refers to a person’s emotions and attitudes about using a particular product, system or service. User experience includes the practical, experiential, affective, meaningful and valuable aspects of human–computer interaction and product ownership.

User experience may be considered subjective in nature to the degree that it is about individual perception and thought with respect to the system. User experience is dynamic as it is constantly modified over time due to changing usage circumstances and changes to individual systems as well as the wider usage context in which they can be found.

We are not here to give another definition of UX; rather, we are inspired by the basics of UX (test, modify, test again, modify again) to inform the production and functionality of digital learning environments, and to give users the best Learning eXperience possible.

The first requirement when creating an optimal User eXperience is to meet the exact needs of the user. The same requirement is true for Learning eXperience (LX); in order for learning to be effective, the learning experience must meet the needs of the individual learner.

In an article at The Glossary of Education Reform it is stated that:

Learning experience as a term is in growing use by educators and reflects larger pedagogical shifts that have occurred in the design and delivery of education to students, and it most likely represents an attempt to update conceptions of how, when, and where learning does and can take place.

Learning experience may also be used to underscore or reinforce the goal of an educational interaction—learning—rather than its location (school, classroom) or format (course, program), for example.

This definition underlines the importance of learning, and more specifically the importance of interactive learning, rather than instruction. Based on this, the learning experience becomes the focal point, which is continually adapted and iterated, using UX theories, to create the best learning experience possible for the learner. The principle of focusing on learner interaction as the core of learning experience, and of learning itself, isn’t new, as it has historical roots in areas such as Experiential Learning and Constructivism.

Wikipedia states the following for experiential learning:

Experiential learning is the process of learning through experience, and is more specifically defined as ‘learning through reflection on doing’. Experiential learning is distinct from rote or didactic learning, in which the learner plays a comparatively passive role… experiential learning considers the individual learning process. As such, compared to experiential education, experiential learning is concerned with more concrete issues related to the learner and the learning context.

Experiential Learning Model

Experiential learning focuses on the learning process for the individual. Because of the direct involvement of active learning, the learner makes discoveries and experiments with knowledge firsthand, instead of hearing or reading about the experiences of others.

David Kolb, an educational theorist and proponent of experiential learning, developed the “Experiential Learning Model” (ELM), which is made up of four elements:

  1. Concrete experience
  2. Observation of and reflection on that experience
  3. Formation of abstract concepts upon the reflection
  4. Testing the new concepts (and then repeating the whole process)

Steve Wheeler points out that Kolb’s model aligns with Jean Piaget’s constructivism, where accommodation and assimilation are seen as part of a process which leads to the internalisation of knowledge by learners. Piaget’s “knowledge schemata” were thought to be:

  • Critically important building blocks of conceptual development
  • Constantly in the process of being modified or changed
  • Modified by on-going experiences
  • Generalized ideas, usually based on experience or prior knowledge

The idea Piaget puts across is that new learning happens when the learner revises, elaborates, adapts and balances what he/she already knows with the new learning experience he/she is part of. In a neurological sense, the brain/mind is seen as constantly building and rebuilding itself as it takes in and adapts/modifies new information, and enhances understanding.

The processes and mechanisms Piaget and Kolb put forward are still valid and relevant, even if the context within which their theories were born has changed drastically. They universally suggest to us that learning is an active, iterative, adaptive and dialectic process which happens within given contexts. A study analysing whether interactive learning increases student performance documents increases in examination performance that would raise average grades by half a letter, and shows that failure rates under traditional lecturing increase by 55% over the rates observed under interactive learning.

Failure rates under traditional lecturing are 55% higher than under active, interactive learning.

Kolb and Piaget’s theories were based on a learning process focused on the student as an individual (active) learner, which is the core rationale in our understanding of adaptive learning and learning experience. Had they formed their theories within the context of today’s technologically advanced and social learning environments, they would probably have taken into account that digital learning, as well as collaborative learning in the context of the social web, is showing great promise. LXMatters is a collaborative research platform focusing on how one can further evolve new models and methods for how future learning experiences might take place.

A Scientific and Methodological Approach To Learning Experience

The dialectic nature of learning within the constructivist learning approach suggests that one can profit from expressing a methodological approach to digital learning. With current technologies we have the possibility of creating intelligent software that can learn and adapt according to the changing nature of the learner’s needs, such as learning styles, individual learning paths, and so forth. What needs to be constant is the underlying methodology, ensuring common functionality as well as individual adaptivity, and last but not least, consistency as the common denominator for all learners and their learning experience.

UX Processes As A Blueprint For LX Processes

Looking at an outline of UX processes from UX Mastery, it is easy to see how this process could be a blueprint for the LX process.

The UX process illustrated below aligns with what we envision an LX process to be, basing our assumptions on the fact that learning experience, especially in a digital environment, is essentially based on the same attributes and needs of the learner as end users have in the UX paradigm. The emphasis on the continuous iterative and dialectic processes to align with the changing character of the needs of the user would be at the center of an LX method as well.

UX Process

1. Strategy

The strategy, in UX terms, articulates the guiding principles and long-term visions of organisations and businesses. In a learning experience methodology it would play the same role, in that it would guide and shape any learning experience design in order to reach the ultimate goal of delivering a tangible and valid learning outcome. The methodology would outline how we build systems for digital learning experiences, as well as guide the iterative learning process connected to the interactions of the learner.

2. Research

The research phase in UX terms is often referred to as the discovery phase, where the results of research on the target users’ needs are seen as the key to an informed user experience. In order to provide the learner with an adapted experience, the research phase is crucial in learning experience as well. In LX terms, specifically applied to software, this phase would take the form of continuous research into, and collection of, the learner’s attributes, including learning style, skills, contextual needs, and progress, storing them in a Learning Record Store (LRS) by means of xAPI statements.

This measurement and collection of data about the learner establishes the basis for Learning Analysis in the following analytic phase of the process.

3. Analysis

The aim of this phase is to draw insights from the collected data, making inferences which can guide the subsequent iterations of the process. In LX terms, the data would be analyzed to gain insight on the learner and provide suggestions on how to improve the overall learning experience.

With data about learners collected and stored in an LRS in the research phase, one is ready to process the data in order to produce outcomes that will drive suggestions for iterations. Several different analytical methods may be used, depending on what best fits the type of data that has been collected. Common methods include:

  • Knowledge analysis, aiming to capture the degree of the knowledge of the learner within a given knowledge domain
  • Content analysis of resources created by learners, such as essays and the like
  • Discourse analysis, aiming to capture data from the learner’s interactions as well as properties of the learner’s language used during interactions
  • Social Learning Analytics, aimed at exploring the role of social interaction in learning

The statistical outcomes of these analytical methods already contain enough valuable data to inform the next step in the process. But, to really be able to predict and suggest an adaptive learning experience as accurately as possible for the learner, we need to apply machine learning techniques to the data.

Machine learning, a subset of artificial intelligence, is a way of programming computers to identify patterns in data as input to algorithms that can make data-driven predictions or decisions. As we interact with computers, we are continuously teaching them what we are like. The more a user interacts, the smarter the predictions become.

This step would entail modelling artificial neural networks representing data such as subject matter and the learner’s interactions with subject matter content, predicting probable outcomes and learning paths. The resulting outcomes and predictions would then suggest iterations in the next phases of the process.
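As a purely illustrative sketch of the kind of prediction meant here, the example below trains a small neural network (scikit-learn’s MLPClassifier) on made-up interaction features to estimate the probability that a learner will master the next module. The features, data and network size are assumptions for the sake of the example, not a description of any production system.

```python
# Illustrative only: predict mastery of the next module from interaction data.
# Requires scikit-learn; the features and training data are made up.
from sklearn.neural_network import MLPClassifier

# Each row: [minutes spent, number of attempts, score on previous module 0-1]
X = [
    [12, 1, 0.9], [30, 4, 0.4], [8, 1, 0.8], [45, 6, 0.3],
    [15, 2, 0.7], [40, 5, 0.2], [10, 1, 0.95], [35, 5, 0.35],
]
y = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = mastered the following module

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

new_learner = [[20, 2, 0.6]]
p_mastery = model.predict_proba(new_learner)[0][1]
print(f"Predicted probability of mastery: {p_mastery:.2f}")
```

The same pattern (features in, probability out) is what the later steps of the method would rely on when composing adapted learning paths.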

4. Design and Production

In the case of a digital learning application, the previous steps in the UX process guide the initial design, wireframing and prototype production of the application, as well as the subsequent iterative processes, which continuously improve the product in terms of UI and UX, creating an optimal user experience.

These two phases merge in LX, as the data from research and analysis are processed to automatically propose the initial information architecture, logic and design of the user interfaces in the product.

5. Reiterate

The reiteration phase of the UX process focuses on continuously evaluating and re-evaluating a project or product through rounds of revisions, in order to improve the overall user experience. Reiteration is also at the heart of LX. Because the learning experience process is based on algorithms derived from machine learning, this step of the process is inherently iterative and adaptive, as it is driven by parameters like the constantly changing state of knowledge and the contextual needs of the learner.

A Tentative LX Method

The method draft outlined below is an inherently gradual and iterative process with no apparent end or conclusion, guided by a consistent set of rules. This reflects the lifelong nature of learning, and the need for learning experiences to be methodologically founded, in both philosophy and technology, for adaptivity and consistency.

Experience and Interaction With Learning Material

Prephase: Context

Context is always king, and as such, an initial measurement of the learner’s learning environment is necessary. Attributes measured might include: gender, age, culture, language, former education, goals, learning style, or any data previously collected from learning experiences stored in an LRS or from services like Mozilla Backpack.
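An assumed, illustrative shape for such an initial context record might look like this in Python; the attribute names simply mirror the list above and do not constitute a formal schema.

```python
# Illustrative only: a learner's initial context measurement as plain data.
learner_context = {
    "learner_id": "a1b2c3",
    "age": 17,
    "language": "nb-NO",
    "culture": "Norwegian",
    "former_education": "lower secondary",
    "goals": ["pass the biology exam"],
    "learning_style": "visual",
    "prior_records": [],   # e.g. earlier xAPI statements fetched from an LRS
}
```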

1. Learner Interaction

The learning material, along with learning paths and other similar data, has been rendered to provide an adaptive learning experience. The learner interacts with the adapted learning material, and this interaction is used as the baseline for data analysis.

2. Discovery and Learning

The rendered learning experience exposes the learner to content and other data which the method has tailored and adapted with the goal of aligning with the learner’s current dialectic process. This process is at the core of this phase, drawing on the constructivist theories it is built upon. The learning experience supports the learner by creating an environment that challenges and inspires creativity and immersion, in turn allowing for independent thinking and new ways of learning.

3. Data Collection

Data about the learner’s context and interactions are collected, marshalled into xAPI statements, and stored in an LRS. These statements contain data about the “Who”, “What”, “When”, “Where” and “How”, and form the basis for the next step, where we start the statistical analysis of our data.
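For concreteness, this Python sketch shows roughly what a single xAPI statement carries: an actor, a verb, an object, a result and a timestamp. The verb URI follows the commonly used ADL vocabulary, but the actor and activity identifiers are invented, and the exact fields and endpoint your LRS expects should be checked against the xAPI specification.

```python
import json
from datetime import datetime, timezone

# A minimal xAPI-style statement: who did what, to which activity, with what result.
statement = {
    "actor": {"objectType": "Agent", "name": "Kari Learner",
              "mbox": "mailto:kari@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/completed",
             "display": {"en-US": "completed"}},
    "object": {"id": "http://example.com/activities/photosynthesis-quiz",
               "definition": {"name": {"en-US": "Photosynthesis quiz"}}},
    "result": {"score": {"scaled": 0.85}, "success": True},
    "timestamp": datetime.now(timezone.utc).isoformat(),
}

# Sending it to an LRS is an HTTP POST of this JSON to the LRS's statements
# endpoint with an "X-Experience-API-Version" header; the endpoint URL and
# credentials are deployment-specific and omitted here.
print(json.dumps(statement, indent=2))
```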

4. Analysis

In this phase, statistical analysis of the data takes place. At this point, highly advanced analytical methods aren’t necessary, as we are looking to abstract simple inferences to lay the groundwork for further intelligent adaptive learning. The abstracted data functions as a self-sufficient backbone, which will be enriched by machine learning outcomes as soon as they have reached a mature enough state.

For the next steps in the process, data is converted into the appropriate data structures and delivered to our machine learning services for further processing.

5. Deep Learning

The statistical data collected during the previous phase is fed into machine learning algorithms, where it is processed and becomes part of what the services already know for a given learner. Artificial neural networks and other data models representing a learner’s learning experience and criteria are updated with new weights and probabilities as the software continues to learn about the learner from the incoming data. As the networks mature in knowledge they are able to create increasingly accurate predictions by inference, which are returned and passed on to the next step in the process.

6. Algorithmic Composition of Adapted Learning Paths and Materials

With xAPI statements and plain statistical data, combined with predictions from the machine learning algorithms, one has a sufficient basis for the algorithmic composition of adapted learning content, as well as recommendations for alternative learning paths through learning content for a given learner. This composition results in data structures and metadata guiding the next step of the process.
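A deliberately simple sketch of what such an algorithmic composition could mean in practice: given predicted mastery per topic (for instance from the machine learning step) and prerequisite relations between topics, recommend the weakest topic whose prerequisites are already mastered. The threshold, topics and data are invented for illustration.

```python
# Illustrative composition of a next step in a learning path. Inputs assumed:
# predicted mastery per topic (e.g. from the machine learning step) and a map
# of prerequisites per topic; the threshold and data are invented.
predicted_mastery = {"basic-chemistry": 0.9, "cell-structure": 0.55, "photosynthesis": 0.2}
prerequisites = {"cell-structure": ["basic-chemistry"],
                 "photosynthesis": ["cell-structure"],
                 "basic-chemistry": []}
MASTERY_THRESHOLD = 0.7

def next_topic(mastery, prereqs, threshold):
    """Pick the least-mastered topic whose prerequisites are all above the threshold."""
    candidates = [
        t for t, p in mastery.items()
        if p < threshold and all(mastery.get(q, 0) >= threshold for q in prereqs.get(t, []))
    ]
    return min(candidates, key=mastery.get) if candidates else None

print(next_topic(predicted_mastery, prerequisites, MASTERY_THRESHOLD))
# cell-structure (photosynthesis is weaker, but its prerequisite is not yet mastered)
```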

7. Adaptive Learning

As the data resulting from the machine learning analytics takes time to reach a state of relatively accurate expression, the rendering of learning experiences should still strive for the utmost adaptivity for the learner. The minimal measure of adaptivity would be to render according to the contextual needs of the learner, combined with the more naive statistical analysis of xAPI statements.

Wash, Rinse, and Repeat

Since the nature of learning experiences is to be adaptive, reiteration and redesign are at the core of the process, which aligns with the UX process for creating optimal User Experiences.

Conclusion

LXMatters is a research effort into how we can apply a scientific, methodological approach married with insights from User Experience to learning and the learning experience as a whole.

Combining UX thinking of continuous evolution of ideas through testing and revisions, with notions of how individual learning happens, we believe we can develop a methodology for creating better learning experiences.

The LXMatters blog will document the research and processes done on the road to a robust, tried, and proven methodology applicable when creating any digital learning system with adaptivity as a goal for learning experiences, leading to an optimal learning outcome.

Glossary

Constructivism
A theory of knowledge that argues that humans generate knowledge and meaning from an interaction between their experiences and their ideas.
Dialectic
Also known as the dialectical method, it is a discourse between two or more people holding different points of view about a subject but wishing to establish the truth through reasoned arguments.
Adaptive Learning
An educational method which uses computers as interactive teaching devices, and to orchestrate the allocation of human and mediated resources according to the unique needs of each learner. Computers adapt the presentation of educational material according to student’s learning needs, as indicated by their responses to questions, tasks and experiences.
xAPI
Also known as the Tin Can API, is an e-learning software specification that allows learning content and learning systems to speak to each other in a manner that records and tracks all types of learning experiences.
Learning Record Store (LRS)
Is a data store system that serves as a repository for learning records necessary for using the xAPI.
Machine Learning
Gives computers the ability to learn without being explicitly programmed. Machine learning explores the study and construction of algorithms that can learn from and make predictions on data.
Deep Learning
Is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise.
Predictive Analytics
A variety of statistical and machine learning techniques that analyse current and historical data in order to make predictions about future or otherwise unknown events.
Artificial Neural Networks
In machine learning and cognitive science, artificial neural networks (ANNs) are a family of models inspired by biological neural networks (the central nervous systems of animals, in particular the brain) and are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown.
Algorithm
In mathematics and computer science, an algorithm is a self-contained step-by-step set of operations to be performed. Algorithms exist that perform calculation, data processing, and automated reasoning.

Join The Conversation

Leave us a comment with your thoughts related to how LX thinking can be applied to create adaptive learning. Finally, if you want more stories like these delivered to you, sign up for the LXMatters newsletter.