5 | Assessment Using E-Portfolios, Journals, Projects, and Group Work
The shift to online learning in higher education creates a fertile environment for potential synergies between authenticity and assessment, and no better way exists to exercise authenticity in assessment than by portfolio. Here, we will refer to e-portfolios, which are portfolios that are no longer paper-based but are now mounted online, usually using a platform such as Mahara.
Simply put, a portfolio is a collection of parts, often called “artifacts,” that has been constructed or compiled by learners wishing to demonstrate their competence in a certain area. While learning institutions use portfolios that are designed for knowledge demonstration, other types of portfolios also exist, for example, “showcase” or performance portfolios, designed to showcase individuals’ value to their organization for purposes of advancement, to secure a position somewhere, or to peddle wares. Technology has accelerated the popularity and broadened the purpose of portfolios by creating many different platforms accessible to users who have no particular design skills.
Within educational institutions, portfolios have increased in popularity on many fronts. Many university programs have introduced portfolios as a means of assessing learners’ aggregated work over the term. Some graduate programs at Athabasca University, an open and distance university in Canada, have replaced comprehensive exams with portfolios. Undergraduate programs have also implemented e-portfolios, reported by University Affairs (Bowness, 2014, para. 2) to be “way past trendy” now. Using the not-uncommon metaphor of a journey, students, through the portfolio process, come to understand their learning as ongoing and sustainable. An undergraduate science student’s e-portfolio at Canada’s McMaster University is described here:
His own e-portfolio exemplifies the tool at its best and most typical: blog-like, with banners, navigation menu and photos. Content-wise, Mr. Narro’s e-portfolio includes pages detailing his employment and his academic and extracurricular activities, along with a section called “Courses” describing the nuances of his iSci program and another titled “Experiences” containing photographs and reflections on his geological field trips to places from Illinois to Iceland. (Bowness, 2014, para. 4)
The e-portfolio permits learners to accumulate, build on, and reflect on the shape of their learning experience throughout their programs, making cogent observations and connections among learning experiences over a period of time. Learners report benefit from their sustained engagement with the project and from having the time and the tools to reflect on their work and their progress. Officials from another Canadian university have indicated their interest in e-portfolios, as they are perceived to be “valuable beyond assessment. . . because you’re able to see the whole person” (Bowness, 2014, para. 10). Additionally, on a simple logistical level, an e-portfolio mounted online is more organic, colourful, modern, and exciting to today’s digital-native learners than a box full of collected papers. What better way to authenticate one’s learning and make sense out of theoretical or abstract knowledge in a day-to-day real world?
Recognition of Prior Learning E-Portfolios
Another very specialized use of learning portfolios in many educational institutions is for assessing and recognizing learners’ prior and experiential or informal knowledge. Called by various names, the recognition of prior learning (RPL) uses a portfolio in which learners “collect, select, reflect, and project” (Barrett, 2000) the breadth and depth of their experiential learning according to standards and processes set by the institution. Naming conventions in the RPL world are important and often confused. An internationally applied process, RPL goes by a number of names. In Canada, it is referred to as both RPL and PLAR (Prior Learning Assessment and Recognition). In the United States, it is largely referred to as PLA (Prior Learning Assessment). Elsewhere, in Europe, it may be called RPL or APEL (Assessment of Prior and Experiential Learning) or APL (Assessment/Accreditation of Prior Learning). Similar variations exist in Australia and South Africa. Depending on institutional standards, RPL processes can be arduous and taxing; accordingly, the credit reward allocation will also vary.
While the purview of this chapter does not include explicating the various systems or methodologies of RPL practice, which vary substantially around the world, we emphasize the value of this type of assessment given its authenticity; as well, RPL offers a good potential contribution to alternative methods of assessment in today’s changing education world.
Good RPL practice holds that knowledge, once surfaced, must be presented in an acceptable format and then responsibly assessed so that learners receive appropriate credit for their prior learning. When learners’ journeys are about and of their own experience, they are fulfilling the central tenets of authentic learning, which include the following:
• Authentic learning is “ill-defined,” thus requiring learners to self-define tasks and activities.
• Tasks are complex and sustained.
• Tasks provide opportunities for applying multiple perspectives.
• Tasks provide opportunities for reflection and collaboration.
• Authentic learning surpasses specificity and can be both integrated into different areas and extended.
• Authentic learning permits a variety of outcomes and competing solutions (Reeves, Herrington, & Oliver, 2002).
The exercise of recounting and recasting one’s prior learning during portfolio preparation reflects all aspects of authentic learning. Learners’ prime focus is their own history. They are the subjects of their explorations. Their lives’ events provide a tapestry of ill-defined activities that must be recalled, investigated, and understood for their learning value and placed conceptually and sequentially into various kinds of documents that usually include a narrative description and some form of explicit learning detail. Athabasca University has a very rigorous prior learning assessment system (http://priorlearning.athabascau.ca/index.php) that requires learners to produce a series of precise learning statements that are aligned with course and program learning outcomes and reflect various levels of learning achievement as set out by Bloom (1956) in his taxonomy of learning.
The timeline created by learners, from their past learning histories through to their vision of their learning future—which they are consciously working on and toward—creates a fabric of sustained engagement with their own learning and with “self.” Self-reflection is key; meaning-making is one of the most difficult tasks in portfolio preparation. Meaning is internally generated from learners’ own experiences. Those experiences must be “selected and collected” (Barrett, 2000), a process which, in itself, requires a degree of critical reflection and engagement with the larger, envisioned outcome.
As discussed in Chapter 4, Dron (2014) has accused learning outcomes of trying to bridge the gap between “knowing how” and “knowing that” (p. 296). Such confusion of purpose may be the case with many learning outcomes given the necessary specificity of language and the difficulty in obtaining such specificity and clarity from writers who may not be sufficiently trained in the nuances of language. However, careful application of Bloom’s language from his taxonomy (1956) serves to differentiate types and levels of learners’ knowledge, because some verbs point more directly to actions (“how”) while others point to the possession of knowledge (“that”). For example, if I research a learning activity, have I designed it? If I designed it, did I create it? Or did I implement it after it was researched and created by others? In the RPL process, specificity of language carefully chosen by the learner is intended to demonstrate “knowing that,” while the follow-up documentation of learning claims should affirm, by outside attestation, that the learner knew “how.” However, we would agree with Dron when he says that examining the picky nuances of language is not an exact science in any of our work, and there exist many possibilities for looseness and error.
On the assessment side of this process, RPL practitioners often describe the process of externalization in metaphors of “yanking” or “pulling” learners’ buried knowledge out of them as they prepare their learning portfolios for assessment (Conrad & Wardrop, 2010). This is difficult work for both learners and their coaches or mentors. The rewards, however, are sterling. Learners report high levels of satisfaction, revelation, and personal growth—in addition to the credit received as a result of their prior learning.
For their part, RPL assessors spend several hours with the e-portfolio.5 A cognitively based task, their evaluation of the e-portfolio seeks to affirm a triangulated presentation of the learners’ grasp of the importance and meaning of their prior and experiential learning. The articulation of their learning must, as an authentic product, situate the learning in a real-world time frame that shows growth and development; it must relate the learning to the external world, professionally and perhaps personally; it must project the potential for that learning into professional or life-related future contexts. Assessors must judge on issues of clarity, breadth and depth, relevance, and level of learning presented. That the demonstrated learning must be appropriate to the university study at hand is a basic tenet of prior learning assessment at institutions of higher learning.
Interestingly, assessors’ comments indicate that they often feel affirmed and informed having read through a learning e-portfolio (Conrad & Wardrop, 2010; Travers et al., 2011). Learners’ reflections and sense-making of their learning and career/life trajectories offer assessors new eyes through which to view their own practices or teaching. In this way, assessment continues to be about learning—for all those involved.
It is extremely difficult to falsify an RPL portfolio, a fact that provides further endorsement of the effectiveness of authentic assessment practices and of this type of document specifically. There are several reasons why this is the case:
• The e-portfolio demands a type of triangulation of data given its requirement for various artifacts to support each other: the learner’s up-to-date resume; a narrative autobiography that outlines and highlights learning activities through learners’ pasts; in the case of the Athabasca University model, extensive sets of learning statements detailing learning that can be documented in the resume; and items of documentation itself that are received by university personnel directly and can be verified and traced back to the originator of the document if necessary.
• The e-portfolio is a labour-intensive and time-consuming document. It is unlikely that an “imposter” could or would devote the sustained effort required to compile such a document.
• Learners become known to university staff in several ways. In some systems, there is face-to-face contact via office visits, webcam verification, or interviews. In other distance institutions, sustained contact via telephone—for mentoring purposes—establishes a relationship that is cemented with the exchange of many work and life details and the continual “yanking and pulling” of past learning from the learner that would render impersonation almost impossible.
• Learners, in conversation with prior learning personnel, are often required to discuss or reference their extant learning at the institution.
The engagement of learners with their learning is key to successful e-portfolio preparation and, hopefully, to a successful assessment by portfolio assessors. Learners, upon completion of their portfolio, usually report an experience that has been arduous and difficult, but also unique and personally rewarding (Conrad & Wardrop, 2010). The raising of self-esteem and personal confidence, and a new awareness of professional potential are also consistently reported by learners (Prior Learning Centre, n.d.).
Just as the e-portfolio presents a sustained, dynamic, rigorous learning and assessment opportunity to learners, so too does the learning journal. Journals as learning tools are both loved and disdained by learners and teachers alike. Dislike of journals arises primarily from several sources:
• Learners resent the amount of time that the journal might consume. (We write “might” because it need not consume an inordinate amount of time, although the potential is there, for those who are more naturally reflective than others, or for those who appreciate the scope that journals usually permit.)
• Some learners are uncomfortable being asked to write down their personal thoughts or opinions. A related source of concern with journals involves their assessment and learners’ thought process that goes like this: These are my thoughts; they are personal; they should not be reviewed for evaluation—or read by anybody, for that matter.
• Some learners, in some programs, suffer from “journal fatigue,” having been given journal assignments one too many times. And some learners have engaged in journal-writing processes that were not well disciplined or organized.
Like the portfolio, the learning journal asks learners to reflect on their learning over time, often over the entire duration of a course. Its purpose is to create a record of the learner’s journey through the course and its materials and resources, including the insights that the journey has wrought; possible exchanges with other learners and with the instructor; and connections that the learner has made with his or her life, learning, and work. Most learning journals allow for a broad range of reflective material.
Journals offer the instructor or the assessor the opportunity to look for growth over time—growth in knowledge, in critical thinking, in the development of comprehension or appreciation of a topic. Journals can be structured so that learners are asked to follow a theme or a topic throughout the course, but the more effective journals, in our opinion, give learners free rein to create their own repository of reflections. Journals can also be used as a vehicle for instructor-learner conversation throughout a course—as a sustained activity whose purpose is the exchange, rather than an assignment that results in a grade. Or, perhaps both, reflecting again the complexity of formative and summative assessment.
Journals as Instruments for Assessment and Evaluation
We alluded above to the fact that some learners have concerns with the notion that the personal thoughts recorded in their journals are read by an instructor or that the journal is assigned a grade based on these musings or reflections. To the first concern, Fenwick and Parsons (2009) stipulate that the purpose of the journal should be made clear to learners. Instructors should clearly outline what they expect in the journal: that it is not a diary, that it is not a log of daily activities, and that it is not a venue for personal confessional-type material. These are not difficult distinctions to establish, and good examples can be provided. Learners can be cautioned and guided to refrain from sharing sensitive material and still conform to the assignment’s expectations, which may include a demonstration of attention to course materials, topics, and themes; critical thinking; and reflections by the learner on his or her own evolution or growth, in terms of learning, during the course.
There are other strategies that can be adopted to facilitate the assessing of journals. The suggestions that follow may address, to an extent, learners’ concerns about privacy. One strategy involves requesting a short synopsis of the entire journal, perhaps two pages, about 500 or 600 words. Learners can be instructed to “highlight” their summary reflections in this short paper, to draw out the most important learning that they experienced, and to comment succinctly on the process of having engaged in sustained journal writing. Instructors can guide the structuring of this document by stipulating certain questions to keep the learner on track, for example: “How would you describe the most critical learning incident from this course?” Or, “What aspect of your course learning will you take forward as you continue your studies?” This document serves a couple of purposes. It forces the learner to revisit the lengthy journal and critically peruse it, and it forces the learner to be succinct and squeeze some very important concepts into few words. This is a process somewhat akin to guiding thesis- or dissertation-writing learners on the development of their research questions: It’s hard to do, it’s key to the success of the research, and well-written research questions usually require several tries.
Creating a shorter document to capture the essence of the longer one also offers options for grading. Instructors may wish to assign a grade only to the synopsized version, adhering to the rubric that has described its shape, thereby downplaying the sense that learners’ feelings or personal musings are being evaluated. Again, how effective this is depends on a number of factors that only instructor and learners can know. Or, instructors may assign an automatic “completion” grade to the actual journal to acknowledge that requirements for the assignment have been met, while restricting judgment of the contents to a grade on the shorter paper. There are many variations on this theme. As always, however, the ultimate decision in both assessment and grading must reflect the course’s intention and its stated learning outcomes; both assignment and assessment must complement the balance of the course’s design.
Self-assessment is another strategy that can be considered in the management of the learning journal. Following a template provided by the instructor, learners use a close reading of their journals to respond to very specific questions that are designed to elicit some critical thought and analysis from the journal’s contents. Learners assign themselves a grade for their journal; the instructor submits a grade for the summary response. As with all self-assessed documents, instructors should have in place a strategy for the self-assessment protocols. (See Chapter 9 for more on self-assessment).
Whatever the means of assessment adopted, instructors should take care to treat journals with confidentiality and to respect the learner’s work as a reflection of that person’s experience in the course. Fenwick and Parsons (2009) suggest “liberating” learners from the academic-style correctness (APA, for example) that structures formal written assignments, creating a type of “free space” for creativity and personality.
Assessing learners’ journal reflections offers instructors an opportunity to experience learners’ insights of a nature and perhaps a scope that exceeds the confines of usual assignment topics. The well-done journal can turn a topic-related musing into an exploration of previously untouched thought. The sustained and consistent journal can document the progress of a learner’s unfolding grasp of a topic, a learner’s attempt at connecting disparate ideas toward theory-making, or a learner’s struggle or success with conceptual material. And whereas wary learners may feel that journal assessments are sitting in judgment of their feelings or opinions, rigorous and appropriate assessment should be an evaluation of thinking, application, analysis, and synthesis—in fact, an indication of Bloom’s (1956) cognitive levels. The rubric that accompanies journal assessment should indicate the structure and outcomes that the assignment calls for.
The rewards, for both instructors and learners, of journal writing have been hinted at in the sections above. This assignment provides latitude for learners to exercise creativity, introspection, and thoughtfulness—infused with personality—while attending to course themes but not being restricted by narrow parameters. It allows them to draw the course content into their own thinking and experiences, and vice versa. It can manifest in Vygotsky’s Zone of Proximal Development or produce the fruits of shared knowledge building, revelation, even transformation, which, as Mezirow (1997) understood it, is a changed perspective slowly developed in learners over the duration of a course.
For instructors, the journal often opens the window into the mechanics of a learner’s learning. Like the curtain being lifted on the Wizard of Oz, instructors can glimpse the inner workings of the learner’s process as he or she has lived it. This offers a type of insight that is rarely afforded the instructor when learners are asked to write on an assigned topic. Logistically, however, if a journal is submitted at the end of the course, the insights and revelations that so often are unveiled come too late for instructors to act upon or acknowledge, except in feedback on that particular document. Conversely, to take the journal in after a shorter amount of time could deprive learners of the chance to develop and expand their thinking to an optimal degree.
Projects and Group Work
When is group work not group work? When it’s a project! While this version of the old joke is not quite true, it can be made to be somewhat true in that the abundance of media tools available to online learners permits a wide range of exciting activities that learners find enjoyable and worthwhile. Many students simply say that learning is fun when they can step away from problem sets or research papers and actually engage with materials or tools. As Windham (2007) points out, they relish the opportunity to be creative, to build, and to experiment with Web-based presentation tools, mind-mapping software, YouTube, video clips, audio enhancements, and graphics. While this is not intended to be a comprehensive list, it points to the vast choice available to learners to enliven a project assignment. Such media-based assignments can be done solo, but the dynamics of group work offer learners much more opportunity for creativity, collaboration, and knowledge building. The solo project is often just an assignment with another name—a piece of work constructed by one learner to complete a task whose purpose is to demonstrate mastery of or comfort with course material. It is the group work project that usually attracts the most attention—and the most disdain.
Group Work Challenges
Roberts and McInnerney (2007) tackle the issue of group work in “Seven Problems of Online Group Learning (and Their Solutions),” naming the problems as follows (p. 257):
• Student antipathy to groups
• Selection of groups
• Lack of group skills
• The “free rider”
• Inequalities of student abilities
• Withdrawal of group members
• Assessment of individuals within the group
As they point out, these problems are interrelated and often causal. Roberts and McInnerney (2007) identify the assessment of individuals within groups as the primary group work issue. Based on our personal experience, that may well be true, but the interrelatedness and causality of group issues make the problem of assessment even more nuanced and difficult. Over what factors does the instructor have control in her groups? Where does her skill and experience most come into play? How can she become aware of a problem before a group’s dynamic deteriorates and learners are put into a potentially harmful social situation? When should she intercede?
In our experience, learner antipathy to group work is historical, usually the result of previous bad group experiences. Bad experiences, in turn, often result from inequity in group members’ skills, the “free rider” phenomenon, and perhaps the unanticipated withdrawal of group members. While instructors can blithely guarantee their learners a better experience “this” time, care must be taken to put measures in place that will foster constructive group activity. Some group work issues are discussed in the following section.
Selection of Groups
Group member selection is one of the ways to improve the group process. While acknowledging that random selection can sometimes work just as well as anything else, Roberts and McInnerney (2007) argue that deliberately selecting a heterogeneous group is the best solution. In this way, the levels and diversity of experience are mixed, and the possibility of ending up with a cluster of similar backgrounds, geographical locations, or some other circumstance is avoided. Another useful strategy is to allow learners to self-select their groups further on in the course, when they are more cognizant of their peers’ learning styles. Watchful instructors, however, must be wary of clique-ism and the possibility of exclusion of members from groups.
Differences or Lack Thereof in Group Skills
This problem can, in part, be considered in terms of “inequality of student abilities.” We must assume, as a starting point, that each class is going to contain a variety of abilities, strengths, and weaknesses. A deliberate selection will in most cases result in a mixture of abilities within the group. Some learners will learn from other learners. Some learners will be frustrated with the input of other learners. From our own experiences both as teachers and learners, we see this as inevitable. To counter these effects of inequality as much as possible, the instructor should explain the function and expectations of the group as clearly as possible and perhaps outline some ground rules for process. That process may include reporting on group progress. It may include assigning learners to specific roles within the group or asking that roles be selected internally by the group without assistance from the instructor. Learners can also be directed to literature on group function, in some easy-to-access “how-to” format.
Regardless of instructional efforts, groups will most likely produce a leader and some followers, some happiness and some unhappiness. It is often the case that those who are initially unhappy will admit to a satisfying outcome once the process has concluded and, with sound direction and some good fortune, the group has succeeded in the task. Response to and reflection on the group process can often be found in learners’ journals, when journals are used as ongoing documents constructed throughout the course, as described above.
The “Free Rider”
Every learner and instructor is familiar with this issue. Perhaps you have experienced it yourself; as an instructor, you have no doubt had learners complain to you about their “free-riding” group members. Tied to group members’ abilities, life’s vagaries, and general inequality, the group member who does not pull his or her weight is all too common. Roberts and McInnerney (2007) suggest two forms of pressure that can be applied to address this issue: instructional pressure, through specific assignment of roles or detailed instructions, or peer pressure, which equates to giving group members permission to either voice their dissatisfaction, privately or publicly, or self-assess the group’s functioning. Group members’ self-assessment of group performance need not be synonymous with evaluation; qualitative input can suffice. All these strategies can be uncomfortable for learners (and for instructors) and require tact, respect, and careful instruction.
Assessment of Individual Group Members
Does it ultimately come down to this? Many would say, “Yes, individual assessment of my effort in the group is what works best, is what is fair.” Supporting this reasoning, Roberts and McInnerney (2007) cite literature that maintains that “assigning group grades without attempting to distinguish between individual members of the group is both unfair and deleterious to the learning process. . . and may in some circumstances even be illegal (!)” (p. 264). Webb, however, counters that the “purpose of assessment is to measure group productivity” (Webb, as cited in Roberts & McInnerney, 2007, p. 264), highlighting the need for measuring learners’ ability to interact, coordinate, cooperate, solve problems, and resolve conflicts. Does peer assessment or self-assessment adequately address those outcomes? They could. But how can these “fait accompli” processes, already complete by the time the instructor receives the finished product, be properly measured?
This nest of situations creates one of the differences in procedure between face-to-face learning, assumed to take place in a bounded environment—a classroom—and online learning and its unbounded space. In the former learning environment, it may be possible for a teacher to observe a group’s interplay and activities or to, in some way, ascertain how the group is functioning (or not) together. (Whether the instructor chooses to act on these insights or observations is another matter.) However, no such prerogative exists online. Without the ability to gather insights or data from observation or physical presence, if a judgment on group process or individual contribution to process is required or desired, instructors must implement measures to collect such data. Some options include the following:
• Assessment of an individual’s contribution to a group project. To conduct such assessment, learners submit a report on their own contribution to the group project, along with—most likely—evidence of that contribution.
• Peer assessment of individuals’ contributions. In this case, each member of the group submits an assessment of each member’s contribution. To many instructors (and learners), this may seem indelicate. To ameliorate potential feelings of “unpleasantness,” instructors might supply a template or form that contains a rating scale and space for comments.
• Self-assessment. Each learner would submit to the instructor a self-commentary on his or her contribution. A variation of this option is to have learners make these decisions among themselves prior to submission.
• Progress reports. Each team, depending on the complexity and scope of the task, provides weekly progress reports on “chunks” of the assignment for review and revision. This process may enable the instructor to become aware of potential problems sooner rather than later. Progress reports also highlight the importance of the process rather than the product.
In each of these situations, instructors can blend, in some reasonable proportion, the collective grade for the completed project with these data from individual assessments. There is no “easy” way to collect this data. Somewhere, somehow, some hard decisions and reporting must occur. Making all the conditions of assessment clear to learners before the activity commences is critically important for fairness. Such clarity should enhance performance and lead to superior outcomes and less “free-ridership.”
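For illustration only, the blending described above can be sketched as a simple weighted average. The 60/40 weighting and the grades below are hypothetical choices for the sketch, not values prescribed in this chapter:

```python
def blended_grade(group_grade, individual_grade, group_weight=0.6):
    """Blend a collective project grade with an individual-assessment grade.

    group_weight is the proportion of the final grade drawn from the
    group's collective mark; the remainder comes from individual data
    (peer assessment, self-assessment, or progress reports). The 60/40
    split used as the default here is a hypothetical example only.
    """
    if not 0.0 <= group_weight <= 1.0:
        raise ValueError("group_weight must be between 0 and 1")
    return group_weight * group_grade + (1.0 - group_weight) * individual_grade

# Example: the group project earned 80%, while peer assessment rated
# this member's contribution at 90% (0.6 * 80 + 0.4 * 90 = 84).
final_grade = blended_grade(80.0, 90.0, group_weight=0.6)
```

Whatever proportion is chosen, publishing it in the rubric before the activity begins satisfies the requirement, noted above, that all conditions of assessment be made clear to learners in advance.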
Roberts and McInnerney (2007) suggest a method whereby each learner submits a pie chart diagram indicating percentages of members’ contributions. They propose that this activity be done individually, and they comment that this system works well.
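Roberts and McInnerney do not specify how the individually submitted pie charts should be combined; one simple possibility, sketched here as our own assumption, is to average each member’s perceived contribution across all submissions:

```python
def average_contributions(submissions):
    """Average each member's perceived contribution (in percent) across
    all members' pie-chart submissions.

    Each submission is a dict mapping member name -> claimed percentage.
    Averaging across submissions is our own illustrative choice; an
    instructor might instead inspect outliers or discount self-ratings.
    """
    members = {name for submission in submissions for name in submission}
    return {
        name: sum(s.get(name, 0.0) for s in submissions) / len(submissions)
        for name in members
    }

# Three (hypothetical) members each estimate the group's contribution split.
charts = [
    {"Ana": 50, "Ben": 30, "Chi": 20},
    {"Ana": 40, "Ben": 40, "Chi": 20},
    {"Ana": 45, "Ben": 35, "Chi": 20},
]
averaged = average_contributions(charts)  # Ana: 45.0, Ben: 35.0, Chi: 20.0
```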
Tuckman’s (1965) seminal research outlined the stages of group formation and performance. Garrison and Archer (2000) refined this understanding of groups in a more precise, education-related fashion, using Pratt’s (1981) work. For these educators, groups have three stages, the first of which compares to Tuckman’s “forming” stage, where clarity of instruction and purpose is of prime importance. Learners need to know, and focus on, the intent of the group and the task at hand. Once past this stage, learners tackle the work (“performing”) and must address all the challenges that come with producing a product as a group. They will, understandably, experience conflict, negotiation, reconciliation, and cohesion.
The third and final stage of group development is termed “ending.” Assessment—and the apprehension of assessment—forms a part of the ending stage. Closure and acceptance also form part of the “ending”; a well-defined assessment plan will help with both those aspects of winding down the group project.
Problems and anxiety aside, group work can provide constructive and positive outcomes to fulfill the constructivist mandate. Given a content-related but ill-defined topic, and the encouragement to use concrete examples, learners working together as a group will bring their own experiences to the assignment. The group project, enacted in this way, provides many benefits to both learners and instructor:
• Learners extend out of the usual text-based realm, creating new interest in the task.
• Learners can demonstrate a new range of skills brought to the fore by working in a new media environment or with new tools, whatever they are.
• Peer appreciation of others changes or grows.
• Tech-savvy learners teach other learners new software or tools; each learner feels empowered.
• The opportunity or need to research course topics beyond assigned readings or textbooks introduces some learners to topic knowledge that might have gone unnoticed.
• Learners practice group learning skills, organizational skills, and personal skills.
• The presentation of the final project online affords learners another opportunity to explain and promote their work; it affords the group “audience” another opportunity to observe and reflect on the thinking and the process of others; and it affords another lens into the application of an authentic response to the topic.
Garrison and Archer (2000) stress the need for authenticity within the group, especially for the group leader, so that the prevailing attention-to-task and resultant engagement can facilitate the group’s work. A “group leader” can refer to the instructor, who is ultimately responsible for assigning groups, roles, and tasks, or to a student leader within the group. In the case of the latter, some specific instruction from the instructor in leadership or group expectations would serve well if the role is to be well executed. Singer, Astrachan, Gould, and Klein (1975, as cited in Garrison and Archer, 2000) suggest that a good group leader is task oriented and focuses on the agreed task at hand.
E-portfolios, journals, projects, and group work all provide opportunities for learners to authentically engage with learning materials and, likewise, to be assessed for meaningful and authentic performance. E-portfolios are becoming increasingly useful in many different environments, RPL being one of them. E-portfolios are also serving graduate learners as authentic vehicles for the demonstration of knowledge, often replacing comprehensive exams. Projects and group work have long formed parts of assessment strategies and are no less useful in online learning than in face-to-face courses. On the contrary, access to innovative media increases the attractiveness of project use in online learning, although the fact of “distance” decreases instructors’ ability to observe group process, therefore increasing the need for adherence to well-explained and well-understood assessment processes.
5 The authors are most familiar with an RPL process in which the terminology assessor is used. Similar processes sometimes use the term evaluator. As is often the case in these twin processes, both are in part correct. Further discussion on the importance of language in RPL processes can be found in Conrad (2011).