Development/History of SLQAT

The SLQAT was developed (2015-2018) by a research team from the University of Minnesota (Andy Furco, Laurel Hirt, Isabel Lopez, Anthony Schulzetenberg) and the University of Georgia (Paul Matthews, Shannon Wilder), supported through a grant funded by the U.S. Department of Education's Fund for the Improvement of Postsecondary Education (FIPSE) "First in the World" program. The instrument has been presented at numerous venues, including the International Association for Research on Service-Learning and Community Engagement, the Gulf-South Summit on Service-Learning and Civic Engagement, and internationally to audiences in Hong Kong, Argentina, and elsewhere. In 2018, a service-learning capstone course at UGA's New Media Institute developed the online version of the tool.

The SLQAT incorporates 28 "essential elements" from research on high quality service-learning, organized into five dimensions. The elements were reviewed, piloted, and narrowed to the current list, each of which also includes descriptive text to help raters decide if the element is present, and if so, how well it is implemented. Each element carries a numerical value or "weight," developed iteratively based on expert and practitioner opinions of the importance of its contribution to quality service-learning outcomes.

Piloting is currently underway to develop resources and test interrater reliability.

Using the SLQAT

The SLQAT may be used for different purposes, such as instructor self-study, course design, faculty development, and as a research instrument. For valid scoring, raters should review both foundational and supplemental data sources, and more than one rater should score independently, then compare and resolve their ratings.

  1. The foundational sources for scoring the SLQAT are the course syllabus and all course-specific materials provided to students (e.g., assignment guidelines not incorporated into the syllabus; student contracts for service-learning; information about community partners, placements, or projects; pertinent service-learning handouts from the institution's service-learning office, etc.).
  2. Supplemental data sources for the SLQAT rating include interviews with/statements from the instructor; information from the campus service-learning office, the community partner, and/or students who took the course; deliverables from the service-learning activity; student reflections; etc. If needed and available, more than one of these supplemental data sources should be secured and reviewed to help enhance the accuracy and confidence of ratings.

For "low-stakes" purposes (e.g., self-study, faculty development, etc.), the SLQAT may be used with only the foundational sources. However, these foundational materials alone will likely not provide sufficient evidence to reliably determine the presence/inclusion of particular service-learning elements.

At least two raters should score a given course independently. Each rater first decides whether an element is absent or present; if present, they use the evidence to rate its level of implementation quality on a scale. (Each element has a "behind the scenes" baseline value representing the hypothesized importance of that element's contribution to service-learning quality outcomes; this base weight is modified by the level of quality.) Next, raters compare and discuss their individual ratings, seek additional information if necessary (e.g., from the instructor), and agree upon a final rating for each element. The SLQAT sums all element scores to determine an overall Service-Learning Quality Score. Because every element is considered important for service-learning quality, a score of zero (absent) for any element substantially reduces the final summed Quality Score.
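The weighting scheme described above can be sketched in Python. Note that the element names, base weights, and quality multipliers below are hypothetical placeholders for illustration, not the instrument's actual values.

```python
# Illustrative sketch of SLQAT-style weighted scoring.
# All base weights and multipliers here are HYPOTHETICAL placeholders,
# not the actual values used by the instrument.

# Hypothetical base weights for three of the 28 elements.
BASE_WEIGHTS = {
    "reflection": 5.0,
    "reciprocity": 4.0,
    "academic_link": 5.0,
}

# Hypothetical multipliers for the implementation-quality scale;
# an absent element (level 0) contributes nothing to the total.
QUALITY_MULTIPLIERS = {0: 0.0, 1: 0.5, 2: 1.0, 3: 1.5}

def element_score(element: str, quality_level: int) -> float:
    """Base weight modified by the agreed level of quality."""
    return BASE_WEIGHTS[element] * QUALITY_MULTIPLIERS[quality_level]

def quality_score(ratings: dict) -> float:
    """Sum all element scores into an overall Quality Score."""
    return sum(element_score(e, lvl) for e, lvl in ratings.items())

# An absent element ("reciprocity" at level 0) pulls the total down.
ratings = {"reflection": 3, "reciprocity": 0, "academic_link": 2}
print(quality_score(ratings))  # -> 12.5 with these placeholder weights
```

The key design point mirrored here is that absence zeroes out an element's contribution entirely, which is why a single absent element substantially reduces the summed score.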

The SLQAT Online

For the web-based SLQAT, each rater should review the usage guidelines and rate each element based on the evidence for its presence and quality. Raters enter their individual scores for each element, then use the generated code to review and compare ratings with other raters and determine a final, agreed-upon rating for the course.

Using the SLQAT Tool - survey
  1. To start the Tool, please click on the "Start" button on the homepage.

  2. This will take you to the instructional pages, which explain the SLQAT tool and its purpose. You may read through the information, or skip the instructions and go directly to the tool.

  3. Once you skip or finish the instructions, you are asked to select either "Start New Survey" or "Continue a Survey". If you choose to continue a survey, you will be prompted for your code to restore your previous answers and continue the survey.

  4. If you click "Start New Survey", the first page asks for required information about the instructor and the rater. After clicking save, you will be taken to the first dimension and can begin the survey. A code will also be emailed to the rater so they may access the survey again later if needed.

  5. After each dimension, remember to save your progress by clicking the save button.

  6. After you have reached Dimension 5, you will finish the survey by clicking on the "Finish" button.

SLQAT Comparison Tool
  1. To start the comparison tool, click on the "Compare Surveys" tab, which is accessible only from the home page or after you have finished a survey.

  2. To view the ratings for a given instructor and course, fill in the information and click "Compare".

  3. A table will display each rater's implementation level and weighted score for every element.

  4. Features: if two or more raters give the same score for a particular element, it is highlighted green. After each dimension, a subtotal for that dimension is shown. The table also shows the final score and the ratio of elements present to absent.
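The comparison features listed above (agreement highlighting, and the present/absent ratio) can be sketched in Python. The data layout, rater names, and scores below are assumed purely for illustration, not taken from the actual tool.

```python
# Sketch of the comparison tool's logic: flag elements where two or
# more raters agree, and compute the present/absent ratio.
# The data structure and values are HYPOTHETICAL examples.
from collections import Counter

# rater -> element -> (implementation level, weighted score)
ratings = {
    "rater_a": {"reflection": (3, 7.5), "reciprocity": (0, 0.0)},
    "rater_b": {"reflection": (3, 7.5), "reciprocity": (1, 2.0)},
}

def agreement(element: str) -> bool:
    """True (i.e., highlight green) when two or more raters gave
    the same implementation level for this element."""
    levels = Counter(r[element][0] for r in ratings.values())
    return max(levels.values()) >= 2

def present_absent_ratio(rater: str):
    """Count of elements rated present (level > 0) vs. absent."""
    scores = ratings[rater]
    present = sum(1 for lvl, _ in scores.values() if lvl > 0)
    return present, len(scores) - present

print(agreement("reflection"))        # both raters chose level 3 -> True
print(agreement("reciprocity"))       # levels differ (0 vs 1) -> False
print(present_absent_ratio("rater_a"))  # -> (1, 1)
```

Dimension subtotals and the final score would simply sum the weighted scores over each dimension's elements, as in the overall Quality Score calculation.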

SLQAT Elements and Dimensions
