Welcome to the GSTA blog! 

In an effort to keep the Graduate Student Teaching Association (GSTA) blog current, we regularly welcome submissions from graduate students as well as full-time faculty. We have recently decided to expand and diversify the blog's content to include new research in the Scholarship of Teaching and Learning (SoTL), public-interest topics related to teaching and psychology, and occasional book reviews, while continuing our traditional focus on teaching tips. Posts are typically short, about 500-1000 words not including references. As this is an online medium, in-text hyperlinks, graphics, and even links to videos are strongly encouraged!

If you are interested in submitting a post, please email us. We are especially seeking submissions in the following five topic areas:

  • Highlights of your current SoTL research
  • Issues related to teaching and psychology in the public interest
  • Reviews of recent books related to teaching and psychology
  • Teaching tips and best practices for today's classroom
  • Advice for successfully navigating research and teaching demands of graduate school

We would especially like activities that align with APA 2.0 Guidelines!

This blog is intended to be a forum for graduate students and educators to share ideas and express their opinions about tried-and-true modern teaching practices and other currently relevant topics regarding graduate students’ teaching.

If you would like any questions to be addressed, you can send them to us and we will post them as a comment on your behalf.

Thanks for checking us out,

The GSTA Blog Editorial Team:

Hallie Jordan, Sarah Frantz, Maya Rose, Raoul Roberts, Tashiya Hunter, Laura Mason, and Megan Nadzan

Follow us on Twitter @gradsteachpsych or join our Facebook Group.

  • 30 May 2019 9:44 AM | Anonymous

    By Matthew Mulvaney, Ph.D., and Rachel Razza, Ph.D., Syracuse University

    Co-teaching (or team teaching) can be an effective way for faculty to work collaboratively to deliver new courses that reflect their combined expertise. Here we discuss the approach we took to developing a course from a co-teaching perspective. The two of us (both faculty in a Human Development and Family Science department) wanted to ensure that our graduate students would be trained in structural equation modeling (SEM). Our motivation was a shared belief that graduate students in our program need to learn the basics of SEM, along with our observation that options for learning SEM on our campus were limited. We had to be creative in constructing such an opportunity, however: neither of us is a trained methodologist, and we felt unequipped to tackle this course alone. Therefore, to cultivate this opportunity for our graduate students, we pushed ourselves to develop the course via a co-teaching approach.

    The idea for this course and our team-teaching approach originated in a Graduate Student Research Seminar Series in our department that was initiated by one of us but ultimately presented as a collaborative effort. The series consisted of four 2-hour sessions in which students explored the basics of SEM using key variables we constructed from data from the Fragile Families and Child Wellbeing Study. The series was more successful than we had imagined: 10-12 graduate students routinely attended each week, eager to participate and learn with us! We took away three important lessons. First, it was clear that the students would require more advanced training in this analytic technique, especially if they planned to build more complex models in their own research. Second, it was evident that in order to provide them with this knowledge, we also required additional training. And third, we were eager to continue this journey as collaborators, as having someone to help prepare, test, and teach the material proved critical in the seminar series. Thus, we identified a summer grant from our university that would allow us to create and teach this course. We spent a significant amount of time consulting with faculty at peer institutions on course design and helping each other strengthen our understanding of the material. We met once a week for approximately 8 weeks to select readings, prepare the syllabus, build models to test, and create assignments. In addition, one of us completed further statistical training in SEM and shared his new knowledge and skills as we prepared the course.

    The course we constructed was taught over two weeks in the summer, with 5-hour teaching days. We both attended all sessions but alternated primary responsibility every other day, split the grading of homework, and co-constructed the exams. The daily routine was consistent. Each session included a lecture based on a chapter from an introductory AMOS textbook. After the new material was presented, students completed a guided activity in which they constructed models in AMOS and ran the analyses included in the textbook chapter. In preparation for class, students also read examples of journal articles that reflected the specific modeling approach they were learning in the chapter. We chose these articles during the course-planning phase and used class time to dissect the models and discuss the results of these current studies. Each session wrapped up with time for questions and an introduction to the homework. The homework assignments paralleled the skills taught that day but were based on a secondary data set we constructed with the help of our external consultant. Thus, students practiced on one data set during class time and transferred their skills to a different data set for the homework assignments.

    Previous work has identified features critical to the development of team-teaching approaches, including a shared commitment to the co-teaching process and to constructive, reflective discussion throughout the design and delivery of the course (Lock et al., 2016). We concur: our ability to openly discuss our challenges with the content at every stage of development and delivery, while building on each other's strengths, was essential. We could both be honest about the difficulties we were having and then lean on each other for support. This dialogue also carried into the classroom, where it gave students an opportunity to observe a collaborative professional relationship in which different techniques and teaching practices were implemented simultaneously in a supportive learning environment (Chanmugam & Gerlach, 2013). The co-teaching approach seemed to work very well, and the feedback on the class was favorable overall.

    For those who are interested in pursuing co-teaching opportunities, we would put forth the following suggestions. First, identify a need or potential area of development within your department or across departments, and then consider potential collaborators. Obviously, selecting an effective co-teacher is essential. Co-teaching is an intense process, and you need someone you feel comfortable working with, who will be committed to the process, and who is as excited as you are about developing the project. You should also survey the institutional supports available for developing the course; such support is critical to facilitating team-taught courses (Morelock et al., 2017). Costs to consider include those associated with both course preparation and course delivery.

    While we are not aware of any instances of graduate students and faculty co-teaching together, it may be a fruitful area to explore. The benefits of co-teaching extend to the instructors themselves, as it allows them to develop their knowledge and skills in a particular domain (Carpenter, Crawford, & Walden, 2007; Marshall, 2014). Thus, we think that further exploration of such models may be beneficial both for developing graduate student teachers and for advancing innovative curricula.




    Carpenter, D. M., Crawford, L., & Walden, R. (2007). Testing the efficacy of team teaching. Learning Environments Research, 10, 53-65. doi:10.1007/s10984-007-9019-y

    Chanmugam, A., & Gerlach, B. (2013). A co-teaching model for developing future educators’ teaching effectiveness. International Journal of Teaching and Learning in Higher Education, 25, 110-117.

    Lock, J., Clancy, T., Lisella, R., Rosenau, P., Ferreira, C., & Rainsbury, J. (2016). The lived experiences of instructors co-teaching in higher education. Brock Education Journal, 26(1), 22-35.

    Marshall, A. M. (2014). Embedded professional development for teacher educators: An unintended 'consequence' of university co-teaching. International Journal of University Teaching and Faculty Development, 5, 17-30.

    Morelock, J. R., Lester, M. M., Klopfer, M. D., Jardon, A. M., Mullins, R. D., Nicholas, E. L., & Alfaydi, A. S. (2017). Power, perceptions, and relationships: A model of co-teaching in higher education. College Teaching, 65(4), 182-191. doi:10.1080/87567555.2017.1336610

    Author Bios

    Dr. Matthew Mulvaney is an Associate Professor of Human Development and Family Science at Syracuse University. He teaches courses in parenting, child development, and family theories. He is currently serving as the chair of the SRCD Teaching Committee.

    Dr. Rachel Razza is an Associate Professor of Human Development and Family Science at Syracuse University. She has served as the department’s Graduate Director and as a member of the Teaching Committee for SRCD. Dr. Razza has received several grants focused on curriculum development and pedagogy in higher education and was honored with the Syracuse University Teaching Recognition Award in 2014.

  • 22 May 2019 10:31 AM | Anonymous

    By Jessica Hartnett, Ph.D., Gannon University

    On the very first day of my Introduction to Statistics class, I show my students the following and tell them that upon successful completion of my course, they should add it to their resumes:

    Special skills:

    • Novice data analysis using JASP software, including descriptive statistics, t-tests, ANOVA, chi-square, regression, and correlation

    Over the course of the semester, I work with them, talking about different statistical tests, analyzing and interpreting countless examples using JASP, and learning basic, regimented APA style Method, Results, and Discussion section standards so my students can “talk” statistics. I want them to live up to that special skill claim and to feel comfortable doing statistics. I teach like this because I believe that statistics instructors are in a unique position to teach a core proficiency within our discipline that is also a specific, highly marketable skill.

    Notice that I didn’t include performing statistics by hand anywhere in that paragraph. I do very few by-hand calculations in my classes. Why? Because statisticians don’t. If students need to understand the guts of data analysis, they will go on to graduate school in a quantitative field. And guess what? Most of our BA/BS students ARE NOT doing that. The American Psychological Association Center for Workforce Studies (APA, n.d.) counted 3.5 million people in the US who have bachelor’s degrees in psychology. Do you know how many of them have PhDs in psychology? 4%, and another 10% have a master’s in psychology. A full 53% stopped with the bachelor's degree, and the remainder pursued post-baccalaureate degrees outside of psychology. So what skills do we need to teach to serve the majority of our students? Basic, novice stats skills, with the assumption that they will learn more in-depth statistics if they pursue graduate study that requires it.

    Teach them statistics so that they can keep up with data and research within their careers and non-quantitative MS/MA programs. Teach them research methodology along with statistics so they can be valued by their employer and trusted to run the occasional correlation or ANOVA, or so that they can help with someone else’s data collection. Teach them enough about statistics that they are not going to fall for click-bait headlines that poorly summarize research.

    Another benefit of not belaboring by-hand calculations is that it leaves you time for other things: mastering analytic software, calculating alternatives to p-values, and teaching your students how to talk statistics. Rather than focusing on by-hand calculations, my students leave our class feeling confident in JASP and able to produce rough APA-style reports that include effect sizes and confidence intervals. Which brings me to my next point: picking appropriate software for novice statisticians who probably aren’t going the academic route.

    Most of your students are not going to graduate school and, if you teach with SPSS, will probably never see SPSS again. I would wager that the small proportion of your students who do go on to graduate school likely won’t see SPSS again, either. And your students aren’t rich and they are on the go, so let’s use free software options they can run on their own machines and, maybe, even tablets and mobile devices. You could use JASP, PSPP, R, Google Sheets, or Jamovi. Both JASP and Google Sheets can be used via web browser, for added flexibility. For a free-ish option, you can teach students to conduct statistical analyses in MS Excel, especially using the data analysis add-ins. I use JASP. It is intuitive and doesn’t take up a lot of RAM. Plus, if your students are graduate-school bound, they can use the free JASP/R hybrid program, Jamovi.

    In terms of my mini-APA-style reports, I have students create a Method, Results, and Discussion section for each test we run. The Method and Discussion sections are only one to two sentences long (I said mini), and the Results section teaches them the basics of in-text statistical reporting. My Results sections also include effect sizes, CIs, and p-values. As I tell my students, the point of conducting a statistical analysis is to share your findings with others, many of whom do not understand statistics at all. Mini-APA-style reports teach them this skill and also allow me to assess my students’ understanding of the analysis.

    *If you have any questions about teaching stats or need any help, feel free to email me. And then re-email me a week later, because I can be horrible at email. Or just DM/follow me on Twitter, @notawful.

    **Important caveats relevant to my argument: You may be teaching stats as part of a sequence. Consider what role you have in changing that sequence, and how you can work with other instructors to maintain consistency for your students. In my department, we just switched to teaching our Intro Psych class with JASP, and we teach our Stats Lab, Multivariate, and Psychometrics classes with R/Jamovi. This was a big shift, but everyone was on board with the change. We also no longer have to charge our students a lab fee for the statistics class, which I think is an improvement. We have small class sizes, 30 or fewer, and significantly smaller advanced classes, which are required for our Bachelor of Science track. Additionally, I pick my own textbook, I don’t teach as part of a stats/research sequence, and I teach mostly non-psychology majors.


    American Psychological Association. (n.d.). CWS data tool: Degree pathways in psychology. Retrieved May 8, 2019.

    Jessica L. Hartnett is an associate professor of psychology at Gannon University in Erie, PA. She enjoys studying novel methods for teaching statistics and research methods, best practices in obtaining informed consent, and positive psychology. In her spare time, she reflects upon how lucky she is to have a philosopher husband who understands the demands of an academic career and two beautiful sons who don’t care about the demands of her academic career in the least.

  • 07 May 2019 6:00 PM | Anonymous member (Administrator)

    By Maria S. Wong, Ph.D., Stevenson University

    Growing up in Hong Kong, I immigrated to Canada with my family at the age of 17. I vividly remember my first day of class as a senior in a public high school. It was a big surprise for me to find that no student really seemed to care when the instructor stepped into the classroom. It was not until the instructor started speaking that the students slowly quieted down and got ready for class. Similar to the experience of many international students, I was used to standing up and greeting the instructor in unison with the rest of the class. Another surprise came when it was time for class discussion. My education in Hong Kong had taught me that instructors often have the right answer in mind when they pose a question. However, my classmates were used to entertaining different points of view. Most of them also felt comfortable speaking their minds and were ready to defend their views.

    Instructors with international backgrounds have probably experienced similar culture shocks. Research on individualism and collectivism (Hofstede, 1980; Triandis, 2001) could explain some of these cultural differences, with people from individualistic cultures (e.g., European Americans) tending to value individual uniqueness, and people from collectivistic cultures (e.g., East Asians) tending to value social hierarchy and group harmony (Oyserman, Coon, & Kemmelmeier, 2002; Triandis, 1995). Indeed, it took me a few years to process and realize how my cultural background has influenced my own learning and teaching. With this blog post, I hope to share “three don’ts and do’s” with fellow instructors who are also thinking about similar issues related to their teaching.

    Three Don’ts

    1. Don’t assume disrespect right away

    Instructors who come from cultures that emphasize hierarchy and authority may have a harder time interpreting the casual demeanor of college students in the U.S. From emails that read like text messages (LOL) to in-person exchanges, instructors may automatically assume that students are being disrespectful. While it is possible that a student is behaving disrespectfully, the behavior may instead reflect cohort or cultural differences. In my experience, students sometimes engage in unprofessional or immature behavior without any bad intention, and these moments can become teachable ones that ultimately benefit the students.

    2. Don’t get bogged down by ESL (English as a Second Language)

    For a long time, I was very self-conscious and insecure about my spoken English. Over time, I have come to realize that my accent has no bearing on how well I teach. I now focus more on whether I am communicating information to my students clearly, and on how else I can support their learning, than on the proficiency of my spoken English. Interestingly, for the past few years I have been teaching a course on Writing in Psychology for our majors. Students seemed relieved when I shared with them that English is my second language and that I also struggled with writing in college. They told me that finally there was an instructor who could understand their struggles. It was a beautiful moment when I felt connected with my students by sharing my vulnerabilities.

    3. Don’t get fixated on negative feedback

    I have to confess that I still have a hard time with this: I tend to focus on the single negative comment and ignore the rest of the positive comments in my student course evaluations. While we are hardwired to pay attention to threats (e.g., Shoemaker, 1996), I think it is important to put everything in context. For me, that means I can never fully process student feedback the first time I read it. I have to remind myself not to put too much weight on a single negative comment unless multiple students have raised the same issue.

    On a related but separate note, you may receive negative comments from students because of your race, ethnicity, gender, age, etc. Do not feel that you have to process those comments alone; share them with a trusted colleague. Nasty comments reflect nothing about you, only about the person who made them.

    Three Do’s

    1. Do use your multicultural experience as an asset

    One class that I consistently teach is Human Growth and Development. I have found that my multicultural experiences have enriched the stories that I share with my students. For example, I usually begin the first class by sharing my own story: how I grew up as an only child living in a 600-square-foot apartment with my parents, maternal grandparents, and my aunt in Hong Kong. I am convinced that personal stories are a powerful tool to build rapport with students and encourage them to think about the role of culture in their own development.

    2. Do seek a trusted person as a teaching mentor

    I am a strong advocate for junior instructors to seek out teaching mentors. I have had the privilege of working closely with a mentor for the past four years. My mentor has offered me great help in processing my thoughts and emotions related to challenging teaching moments. There were also times when I was not sure whether I was overreacting because of my own biases, so it was good to have a trusted person to help me understand and interpret the situation from a different perspective. I am now serving as a mentor for a junior faculty member in Brazil and I hope to take on a supportive mentoring role through our regular Skype meetings.

    3. Do have a growth mindset toward teaching

    Stemming from Confucianism, my education experience in Hong Kong emphasized the mastery of material, which was often achieved through deliberate practice and memorization. In contrast, stemming from Greek philosophy, my education experience in North America was Socratic, which emphasized learning through the process of self-discovery (Tweed & Lehman, 2002). For my teaching to be effective in the North American context, I have learned the importance of designing and incorporating engaging class activities that help students come to knowledge via critical thinking, which is something that I have not experienced much in Hong Kong. For me, having a growth mindset—believing that my teaching ability can be developed through practice—really motivates me to reflect on my experiences and helps me strive to become a better instructor every day.

    Taken together, my multicultural experiences have become a core part of my identity. While there are certainly difficult moments when I navigate between different cultures, culture is ultimately what makes my classes come to life. There are countless moments when my teaching is enriched as I incorporate elements of culture and diversity. While all instructors can certainly bring these elements into their teaching, I do think that those of us with multicultural backgrounds tend to be more attuned to the nuances of cultural differences, which can become a great asset for our teaching. We have so much to offer!


    Hofstede, G. (1980). Culture’s consequences: International differences in work-related values. Newbury Park, CA: Sage Publications, Inc.

    Oyserman, D., Coon, H. M., & Kemmelmeier, M. (2002). Rethinking individualism and collectivism: Evaluation of theoretical assumptions and meta-analyses. Psychological Bulletin, 128, 3–72.

    Shoemaker, P. J. (1996). Hardwired for news: Using biological and cultural evolution to explain the surveillance function. Journal of Communication, 46, 32-47.

    Triandis, H. C. (1995). Individualism & collectivism. Boulder, CO: Westview Press.

    Triandis, H. C. (2001). Individualism-collectivism and personality. Journal of Personality, 69, 907–924.

    Tweed, R. G., & Lehman, D. R. (2002). Learning considered within a cultural context: Confucian and Socratic approaches. American Psychologist, 57, 89-99.

    Maria S. Wong is an Associate Professor of Psychology at Stevenson University. She teaches courses such as Writing for Psychology, Human Growth and Development, Introduction to Psychology, Statistics for Social and Behavioral Sciences, and Parenting. To her students, Dr. Wong is known for her energy, enthusiasm, supportive guidance, and the use of creative learning activities. As a developmental psychologist (Ph.D. in 2011 from the University of Illinois at Urbana-Champaign), Dr. Wong has an active research program focusing on children’s social-emotional development within the family context. Her work has been published in journals such as Child Development and the Journal of Family Psychology. Dr. Wong is a member of the Society for the Teaching of Psychology (STP) and serves as an Associate Editor for the STP E-book series. She also serves on the teaching committee of the Society for Research in Child Development (SRCD) and was a Co-Chair of the SRCD 2019 Teaching Institute.

  • 01 May 2019 10:00 AM | Anonymous
    By Jennifer A. McCabe, Ph.D., Goucher College

    Last year, as part of my portfolio for promotion to Full Professor, I wrote a teaching philosophy statement. Because such a statement is required for nearly every teaching position in academia, this was not my first draft; in fact, I had written a solid statement for my tenure case just six years earlier. At first I asked myself: did anything really change in that time? I soon realized that, in a way I could not have articulated earlier in my career, I could now identify six core principles that guide every teaching-related decision I make. I hope that by sharing these principles I can encourage others to identify and develop their own guiding principles as higher-education practitioners.


    1. Strategies for Durable Learning

    My scholarship focuses on learning strategies that benefit long-term memory, and I have become more intentional about my responsibility to integrate these evidence-supported memory principles into the structure and delivery of my courses. Early in my career I was worried that some of these choices would be unpopular with students. It took more time and confidence in the classroom to commit fully.

    Now I do so transparently and unapologetically. This includes the use of frequent, effortful, low-stakes, cumulative, spaced (distributed) retrieval practice (a.k.a. quizzes) - followed by discussion to encourage elaboration and connections - in all of my courses. It’s amazing how readily students get on board with these strategies even though they require more time and effort than traditional class practices.

    2. Interest-First Approach

    Memory researchers also know about the self-reference effect, that we remember information more easily if it relates to ourselves. I enact this principle by adjusting the entry point to many topics and assignments, allowing student interest to be the guiding force. If they begin work on a complex topic using a question or a problem sparked by natural curiosity, I trust that deep and durable learning of the content will follow. I give students much more agency in their assignments and approach to learning than I did in my early years of teaching.

    I also prioritize a narrative approach to course material - learning psychological science through stories. I assign popular press books and articles whenever possible to promote real-world applications of concepts, ensuring a connection between my students’ coursework and their lives. To further engage student interest, I make sure to include (and prioritize) interactive elements such as demonstrations and discussions during each class period.

    3. Integration and Connection

    I intentionally structure my classes to encourage students to make meaningful connections, drawing on the principle of elaboration. Every day in class, I emphasize how current material links with past topics, and prompt them to continue this work in assignments outside of class.

    For example, I ask students to connect various course topics at the end of the semester as part of a final exam take-home essay. They may identify connections between various aspects of the class with course themes, draw on course material in writing a narrative about each stage of cognition needed to play a game of their choosing, or write a story about how each component of memory would contribute to a day in a person’s life. I have also enjoyed the challenge of embracing college-wide Theme Semesters in my courses, including topics of Mindfulness and Storytelling.

    4. Authentic and Shared Learning

    I strive to increase the types of activities and assignments that are situated in authentic real-world issues, and that are shared beyond a submission to the instructor. I have found that students are far more engaged—and submit higher-quality products—under these circumstances.

    For example, sharing can happen among classmates in the form of each student reading a different article on a certain topic and then coming to class ready to teach (and learn from) peers in small group discussions. Or they may present mini-TED Talks, consisting of an engaging 5-minute oral presentation on a course-related topic of their choice. There is also great value in having students share their work in other settings, such as presenting at the college’s student symposium or creating resources on course topics for the public. Each semester my students in a seminar on Cognition, Teaching, and Learning complete a “Translational Project for a General Audience.” Formats include podcasts, infographics, videos, games, and one time even a children’s book. Several students who wrote posts in the style of The Learning Scientists blog were subsequently published on this site. Talk about sharing the authentic work of learning!

    5. Metacognitive Self-Reflection

    My research program also focuses on metacognition, specifically the extent to which students know about and use effective strategies. This scholarly interest permeates all my courses with the goal that students learn about, and reflect on their own, learning. The main idea is to embrace desirable difficulties: learning strategies that are initially slower and harder, but produce more durable memories. Students experience in-class demonstrations showing the memory benefits of these strategies (e.g., spacing, elaboration, testing), followed by activities and assignments to encourage an examination of their own learning beliefs and misconceptions. Then they brainstorm the best ways to communicate this information to peers, and plan for how they will utilize these strategies.

    Metacognitive development is also a natural side effect of frequent, low-stakes, cumulative quizzing. Testing is not only an effective learning strategy, it also provides metacognitive feedback about the state of one’s own knowledge. I encourage students to use testing for both purposes, knowing that the best way to avoid the fluency illusion (believing you have learned something because it seems familiar or easy) is to take a test that requires effortful retrieval from memory. Further, in classes with major tests, I administer a post-exam metacognitive debrief activity aimed to help students understand if they were overconfident, how they studied, the reason(s) they answered items incorrectly, and a preparation plan for the next exam.

    6. Transparent Course Design with Intentional Scaffolding

    I have improved my communication with students regarding the goals and objectives of my courses, and the purpose behind how I structure the class and assess student learning. In other words, I think about course design in a more integrated way, connecting learning objectives to teaching strategies and assessments. I work to enhance the clarity and detail of my syllabi and assignment instructions, making them as purpose-driven and explicit as possible. To close the loop on this process, one of my favorite activities is to ask students on the last day of class to reflect on the course objectives, evaluate their progress, and identify the components of the course that helped them improve.

    With regard to scaffolding, I remind myself that each of us is in a developing state of expertise (to borrow growth-mindset language). Some students in my class will need a lot of support and opportunities to get to the level of expertise I expect, and others will need less. The best remedy for this, in my opinion, is to offer early and frequent opportunities for formative feedback. Complex assignments can be broken down into scaffolded components, with feedback at every step. This approach leads to both higher-quality end products and a more positive learning experience for students. It is also a step toward a more inclusive classroom that allows for students of diverse backgrounds and abilities to grow.

    A central theme that unites all of the above is something I express to my students early and often: I care about your learning. Learning, by definition, is about change. And I am committed to nurturing that growth in my students, as well as in myself as an ever-evolving practitioner. In this way, I can maintain high standards in my courses while helping students feel informed and supported in their efforts to achieve success. In turn, they can (and should) have high standards for me, including expectations of preparation, availability, clear and consistent communication, prompt feedback, authentic and enthusiastic engagement, and—maybe most importantly—ongoing efforts to improve. After all, teaching is about change too.

    Jennifer A. McCabe is a Professor in the Center for Psychology at Goucher College in Baltimore, Maryland, where she has taught since 2008. She just completed her 15th year of full-time teaching, having also taught at Marietta College in Ohio. She earned her B.A. in Psychology from Western Maryland College (now McDaniel College), and her M.A. and Ph.D. in Cognitive Psychology from the University of North Carolina at Chapel Hill. She has taught courses including Introduction to Psychology, Cognitive Psychology, Human Learning and Memory, Statistics, Research Methods, and Seminar in Cognition, Teaching, and Learning. She has won teaching excellence awards from Marietta College and Goucher College. Her research interests include memory strategies, metacognition, and the scholarship of teaching and learning. She has been published in Memory and Cognition, Teaching of Psychology, Scholarship of Teaching and Learning in Psychology, Instructional Science, Frontiers in Psychology, Journal of Applied Research in Memory and Cognition, and Psychological Science in the Public Interest. Supported by Instructional Resource Awards from the Society for the Teaching of Psychology (STP), she has also published two online resources for psychology educators on the topics of mnemonics and memory-strategy demonstrations. She has served as a Consulting Editor for Teaching of Psychology, and is currently a Consultant-Collaborator for the Improve with Metacognition project.

  • 30 Mar 2019 12:00 PM | Anonymous

    By Teresa Ober, Kalina Gjicali, Eduardo Vianna, and Patricia Brooks

    To be informed and responsible citizens, students should be able to make sense of data—and in this day and age, we live in a world with an abundance of it!  As such, developing students’ quantitative literacy (QL) has become one of the overarching goals of undergraduate education (Sons, 1994). QL is considered “an aggregate of skills, knowledge, beliefs, dispositions, habits of mind, communication, capabilities, and problem solving skills that people need in order to engage effectively in quantitative situations arising in life and work” (as cited in Steen, 2001, p. 7). QL often involves applying mathematical thinking skills to real-world data with the purpose of drawing informed conclusions about issues of personal and/or societal concern (Elrod, 2014). Students who possess strong QL skills need not have strong computational backgrounds, but should be able to identify and interpret quantitative relations (e.g., in visual graphs), organize quantitative information (e.g., in spreadsheets), and communicate effectively about the relevance of quantitative data in everyday life (Blair & Getz, 2011). It has been argued that QL is most effectively taught when embedded across the curriculum, given that a critical component of its application involves identifying quantitative relations in varied real-world contexts (Hughes-Hallett, 2001). Embedding QL in college classes across disciplines helps students develop skills they will need to engage effectively in work and life beyond college. In this regard, instruction around QL that uses psychology content can support development and application of practical quantitative reasoning skills outside of the classroom. Hence, psychology departments are increasingly recognizing the need to teach QL as a core cross-curricular requirement (Lutsky, 2008). This means using data and mathematical thinking in all psychology courses, and not just in statistics and research methods courses.

    In this post, we offer some perspectives on how to promote QL across the psychology curriculum. Some of the tools and ideas for integrating QL activities into psychology courses were presented during a recent GSTA-sponsored workshop held at the Graduate Center of the City University of New York on March 6, 2019. During the workshop, Kalina Gjicali, PhD candidate in Educational Psychology, presented best practices for using visual graphs to help students develop quantitative concepts and skills in interpreting data. Dr. Eduardo Vianna of LaGuardia Community College shared resources developed through the Numeracy Infusion Course for Higher Education (NICHE) / Numeracy Infusion for College Educators (NICE), a consortium of educators who share a common mission of promoting quantitative reasoning across various college-level courses. He also shared information about a new CUNY-wide project to improve college students’ QL skills and his own experiences in teaching QL in an introductory (general) psychology course. Teresa Ober, PhD candidate in Educational Psychology, described sources of secondary data and free open-source statistical programs that can be used to develop students’ data analysis skills.

    Engaged Pedagogy and Quantitative Literacy

    Educational research suggests that QL is best taught through student-centered, progressive pedagogies that promote active and inquiry-based learning (Lowney, 2008). In particular, studies have demonstrated the following strategies to be especially effective for teaching QL (Carver et al., 2016): (a) active learning, in which students are engaged learners rather than passive recipients of information; (b) inquiry-based learning, which emphasizes conceptual thinking rather than rote skills and memorization of facts as well as the use of problems and examples that are relevant to real-life situations; and (c) the use of technology to analyze actual data in real-life situations. According to constructivist learning perspectives (Cobb, 1994; Fosnot, 1996; Keeling, 2004), students learn most effectively when they explore new concepts and ideas while working out solutions to meaningful problems and considering the implications of research findings. In other words, students should figure out for themselves how new information, concepts, and ideas relate to their existing systems of knowledge and beliefs, and have opportunities to revise and expand their views in response to new knowledge.

    Strategies for Teaching Quantitative Literacy

    Interpreting Graphs

    One way to build QL skills is to focus on data and visual representations of data. Introducing effective graphic displays of data into college lectures can take the focus away from text-heavy slides that summarize information as if it were established fact (as opposed to research findings that may be in need of replication) and instead towards student-centered learning and knowledge construction. By being presented with appropriate visual representations of social science data, students can be expected to (Beaudrie et al., 2013):

    • Articulate their ideas

    • Express themselves with precision  

    • Ground their observations in evidence  

    • Test claims and hypotheses  

    • Participate in civil discourse

    • Represent what they are ill-equipped to see

    • Recognize and weigh uncertainty

    • Construct a context to attract interest and to inform critical thinking

    You can build QL with your college students using the free online feature “What’s Going On in This Graph?” created by The New York Times Learning Network in partnership with the American Statistical Association. Updated on a weekly basis, this resource features graphs of different types and contexts, on topics ranging from labor and automation to teen smoking habits (see figure below), that can be used to ask students the following questions:

    1. What do you notice?

    2. What do you wonder?

    3. What’s going on in this graph?

    4. What are the implications for ________ (e.g., understanding health risks of teenagers)?

    All releases are archived, so instructors can use previous graphs anytime. Visit this introductory post and this article about how teachers use this powerful activity.

    Analyzing Data in Class

    Hands-on opportunities to work with actual data can open many doors for students, especially for those who have had limited experience with data analysis or have anxiety about it. The concept of using secondary data to teach students about psychological science is not a novel one (see Sobel, 1981), but it has received reinvigorated interest due to the vast amount of open-access data currently available. In thinking through an in-class data demo, it might be useful for instructors to consider these questions:

    1. How is this data source meaningful to students within the course?

    2. What tools are available to students to help them analyze the data?

    3. What strategies/resources can be made available to students to help them interpret the data?

    4. How can we apply the findings from the data to everyday life?

    The Data

    Selecting a dataset for a data demo project is a crucial first step, and will depend on course content as well as the skills and reasoning abilities that you want your students to develop. Resources abound, with data sources including Kaggle, UNData, OECD, IES, and several excellent OSF repositories (e.g., EAMMi2), as well as data provided by local and regional government agencies. In choosing the dataset, consider what topics might be of interest to students, what problems/questions they can use the data to address, and whether there is sufficient documentation to support student learning. For example, for a course covering language development, the CHILDES database (part of Talk Bank) is an invaluable resource. This database contains transcripts of parent-child conversations in a variety of languages, often with accompanying audio or video, and includes datasets for children growing up in multilingual environments as well as datasets with various clinical populations (e.g., developmental language disorder, autism spectrum disorder, hearing loss). This resource includes CLAN software for analyzing conversational interactions and manuals to help you get started.

    Another option for integrating data collection into instruction is to ask students to complete a brief survey during class time. GoogleForms is a very convenient way to collect and present such data quickly, but other survey programs can work just as well. Asking students to complete a short-form survey may be an effective way to introduce them to a dataset by helping them become familiar with the actual scales used in the original study.

    The Tools

    Considering which statistical software programs are available and accessible to students is critical. While many undergraduate psychology courses use proprietary programs like SPSS or STATA to teach statistics, whether students own a license or actually have access to such programs off-campus is often questionable. Thus, it might be more advantageous for students long-term to consider free and open source programs, such as JASP or R. Sometimes the more sophisticated statistical programs may not even be necessary for teaching QL. Rather, in many cases, using a spreadsheet application such as GoogleSheets or Excel might actually be sufficient for teaching basic statistics (DiMaria-Ghalili & Ostrow, 2009). Many students have access to Excel on their personal computers, but benefit from instruction on how to use it to make pivot tables or charts.
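    As a concrete illustration that a full statistical package is not always needed, here is a minimal Python sketch using only the standard library's statistics module. The sleep-hours numbers are invented for illustration, standing in for the kind of brief in-class survey described above:

```python
import statistics

# Hypothetical responses to a brief in-class survey:
# hours of sleep each student reported getting before an exam.
sleep_hours = [6.5, 7.0, 5.0, 8.0, 6.0, 7.5, 4.5, 6.5, 7.0, 5.5]

mean = statistics.mean(sleep_hours)      # central tendency
median = statistics.median(sleep_hours)  # robust to outliers
sd = statistics.stdev(sleep_hours)       # sample standard deviation

print(f"Mean: {mean:.2f}, Median: {median:.2f}, SD: {sd:.2f}")
```

    The same three values can be produced with AVERAGE, MEDIAN, and STDEV formulas in GoogleSheets or Excel, which is often all a QL lesson requires.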

    The Interpretation

    In preparing a lesson around the use of secondary data, it is important to consider students’ prior knowledge, skills, and interests to ensure that the instruction is developmentally appropriate. You might start by distinguishing research questions that relate to frequencies (How often?), associations (Are X and Y related?), or causal relationships (Does X cause Y?) as this can lead to a fruitful discussion of how to fit one’s analytic approach to the research question at hand. Students may need instruction to decide what sorts of graphs are appropriate for different types of data (e.g., line graphs, bar graphs, scatterplots). This can lead to further discussion of how to present findings in APA format, determine statistical significance, and interpret p-values.  Along the way, you might consider outliers, skewed distributions, and various threats to the validity of the research, such as the representativeness of the sample. As you guide your class in interpreting research findings, allow for spontaneity by offering students opportunities to test their own hypotheses and develop ideas for future research.
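    To make "fitting one's analytic approach to the research question" concrete, here is a minimal sketch for an association question (Are X and Y related?): a Pearson correlation computed from its definition. The hours-studied and quiz-score values are invented for illustration:

```python
import math

# Hypothetical paired data: hours studied (x) and quiz score (y)
# for ten students -- made-up numbers, not real data.
x = [1, 2, 2, 3, 4, 4, 5, 6, 7, 8]
y = [55, 60, 58, 65, 70, 68, 72, 75, 80, 85]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Pearson r: sum of cross-deviations divided by the product
# of the root sums of squared deviations.
num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
den = math.sqrt(sum((xi - mean_x) ** 2 for xi in x)) * \
      math.sqrt(sum((yi - mean_y) ** 2 for yi in y))
r = num / den

print(f"r = {r:.2f}")
```

    A dataset like this invites students to discuss what the coefficient does and does not license: a value near 1 signals a strong positive association between study time and scores, but not that studying caused the higher scores.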

    The Significance, … or rather, Relevance

    Finally, it might be helpful to consider what possible applications can come out of the findings presented in class. Even if the findings might seem intuitive, walking students through the process of analyzing and interpreting the findings should ultimately lead them to feel empowered in working with data. When findings are non-significant and hypotheses are not supported, students have opportunities to learn that this sort of “productive failure” is part of the research process. For this reason, using secondary data as opposed to artificially generated data can lead to a more practical learning experience, particularly when resources for conducting secondary data analysis are plentiful.


    Strengthening QL has been recognized as an imperative of undergraduate education, with students best served when instructors use an “across-the-curriculum” approach to ensure they have sufficient opportunities to develop QL skills. Like any well-implemented curriculum, teaching QL necessitates planning. When datasets are analyzed or in-class demonstrations are conducted, instructors should take extra precautions to ensure that the lessons achieve their objectives. Lack of clarity during a demonstration, improper analyses, or technical problems can greatly interfere with learning opportunities when conducting in-class demonstrations. Nevertheless, we hope the resources described above may at the very least offer some initial inspiration for incorporating QL instruction into all of your courses.



    References


    Beaudrie, B., Ernst, D. & Boschmans, B. (2013). First semester experiences in implementing a mathematics emporium model. In R. McBride & M. Searson (Eds.), Proceedings of SITE 2013-Society for Information Technology & Teacher Education International Conference (pp. 223-228). New Orleans, Louisiana, United States: Association for the Advancement of Computing in Education (AACE). Retrieved March 21, 2019 from

    Carver, R., Everson, M., Gabrosek, J., Horton, N., Lock, R., Mocko, M., … Wood, B. (2016). Guidelines for assessment and instruction in statistics education: College report. Alexandria, VA: American Statistical Association.

    Cobb, P. (1994). Where is the mind? Constructivist and sociocultural perspectives on mathematical development. Educational Researcher, 23(7), 13-20.

    DiMaria-Ghalili, R. A., & Ostrow, C. L. (2009). Using Microsoft Excel® to teach statistics in a graduate advanced practice nursing program. Journal of Nursing Education, 48(2), 106-110.

    Elrod, S. (2014). Quantitative reasoning: The next "across the curriculum" movement. Association of American Colleges & Universities Peer Review, 16(3), 4. Retrieved online:

    Fosnot, C. T (Ed). (1996). Constructivism: Theory, perspectives, and practice. New York, NY: Teachers College Press.

    Hughes-Hallett, D. (2001). Achieving numeracy: The challenge of implementation. Mathematics and democracy: The case for quantitative literacy, 93-98.

    Keeling, R. (Ed.). (2004). Learning reconsidered: A campus-wide focus on the student experience. Washington, DC: American College Personnel Association and National Association of School Personnel Administrators.

    Lowney, K. S. (Ed.). (2008). Teaching social problems from a constructivist perspective. New York: W.W. Norton.

    Lutsky, N. (2008). Arguing with numbers: Teaching quantitative reasoning through argument and writing. Calculation vs. context: Quantitative literacy and its implications for teacher education, 59-74.

    Sons, L. R. (1994). Quantitative reasoning for college graduates: A complement to the standards. Mathematical Association of America. Retrieved online:

    Steen, L. A. (Ed.). (2001). Mathematics and democracy: The case for quantitative literacy. Report prepared by the National Council on Education and the Disciplines. Retrieved online:

    Author Bios

    Patricia J. Brooks is Professor of Psychology at the College of Staten Island and the Graduate Center, CUNY and GSTA Faculty Advisor.  Brooks was recipient of the 2016 President’s Dolphin Award for Outstanding Teaching at the College of Staten Island, CUNY.  Her research interests are in two broad areas: (1) individual differences in language learning, (2) development of effective pedagogy to support diverse learners.​

    Kalina Gjicali is a doctoral candidate in Educational Psychology at The Graduate Center, CUNY and a Quantitative Reasoning Fellow for the University at the Quantitative Research & Consulting Center (QRCC).

    Teresa Ober is a doctoral candidate in Educational Psychology at the Graduate Center, CUNY. Teresa is interested in the role of executive functions in language and literacy. Her research has focused on the development of cognition and language skills, as well as how technologies, including digital games, can be used to improve learning.

    Eduardo Vianna, Professor of Psychology, has taught at LaGuardia since 2005. He earned a Ph.D. in developmental psychology from the Graduate Center, CUNY after completing his medical studies in Brazil. Building on recent advances in Vygotskian theory, especially the Transformative Activist Stance approach, his work focuses on research with transformative agendas. His recent work includes applying critical-theoretical pedagogy to build the peer activist learning community (PALC), which was featured in the New York Times. In 2010 he received the Early Career Award in Cultural-Historical Research from the American Educational Research Association, and he is currently chief editor of Outlines Critical Practice Studies and Co-PI on the NSF grant "Building Capacity: A Faculty Development Program to Increase Students' Quantitative Reasoning Skills."
  • 13 Mar 2019 1:03 PM | Anonymous

    By David Kreiner, Ph.D., University of Central Missouri

    On a cosmological level, time may be infinite, but we constantly run out of it in our daily lives. I have seen students and faculty struggle with it. I have certainly experienced it myself:

    • When my class time is up but I haven’t finished everything I wanted to.
    • When I’m planning a 16-week class and there’s just not enough time.
    • When I thought I could finish a draft of a paper in one afternoon but I didn’t even get close.

    Similarly, your students may:

    • have trouble meeting course deadlines;
    • fail to use effective study methods because they don’t have enough time;
    • be unrealistic about allocating time for the different components of a major project;
    • or find that they are about to graduate before they had a chance to accomplish all their goals.

    I propose that we look to the rich literature on the psychology of time in the same way that we have looked to the science of learning for more effective studying and teaching methods. I will describe one example to illustrate what I mean, but there is much more out there. If only we had time to explore it all!

    Kahneman and Tversky (1979) defined the planning fallacy as a tendency to underestimate how much time we need to complete larger tasks and overestimate the time we need for smaller tasks. We tend to be confident in these estimates – confident, but wrong (Buehler, Griffin, & Peetz, 2010). Think about how this affects your plans for a large project like your thesis or dissertation. Also think about how your students might struggle with finishing a project on time, or why they might run out of time and submit work that is less than their best.

    Fortunately, there is research on how to estimate more accurately how much time it will take to do something. One strategy is to avoid anchoring effects, in this case anchoring on the present when making a time estimate. LeBoeuf and Shafir (2009) found that people could make better estimates if they identified a future date at which they thought they would finish instead of estimating how many days from now they would finish.

    Another way to make more accurate time forecasts is to consider how much time similar tasks took in the past (König, Wirz, Thomas, & Weidmann, 2015). It also helps to think about possible obstacles that can cause delays (Buehler et al., 2010). When your student is estimating that she can knock out that paper in three hours, she may not be considering possible interruptions, technology issues, or finding out that the key article she needs is not available full-text.

    Imagining from the perspective of an observer can also improve accuracy (Buehler et al., 2010). What would your friend say about your plan to complete the literature review of your dissertation in one week?

    We might ask whether making better time estimates is that important. It doesn’t speed anything up or save time, right? But if our estimates are inaccurate, we will make mistakes in budgeting our time. Other things may fall through the cracks – sleep, for example – which could affect our well-being and success. One way to improve our relationship with time is to get a better handle on how much time we need. The research suggests that we can get better at it.

    At the upcoming APS-STP Teaching Institute, I will share a few other examples of how we can make use of the literature on the psychology of time. I hope to see you there …. if you can find the time!


    Buehler, R., Griffin, D., & Peetz, J. (2010). The planning fallacy: Cognitive, motivational, and social origins. In M. P. Zanna & J. M. Olson (Eds.), Advances in experimental social psychology (Vol. 43, pp. 1-62). New York, NY: Academic Press. doi: 10.1016/S0065-2601(10)43001-4

    Kahneman, D., & Tversky, A. (1979). Intuitive prediction: Biases and corrective procedures. TIMS Studies in Management Science, 12, 313-327.

    König, C.J., Wirz, A., Thomas, K.E., & Weidmann, R.Z. (2015). The effects of previous misestimation of task duration on estimating future task duration. Current Psychology, 34(1), 1-13. doi: 10.1007/s12144-014-9236-3

    LeBoeuf, R.A., & Shafir, E. (2009). Anchoring on the “here” and “now” in time and distance judgments. Journal of Experimental Psychology: Learning, Memory, and Cognition. doi: 10.1037/a0013665

    David S. Kreiner is Professor and Chair of the School of Nutrition, Kinesiology, and Psychological Science at the University of Central Missouri, where he has taught since 1990. He earned his B.A. in Psychology and Ph.D. in Human Experimental Psychology from the University of Texas at Austin. He has taught courses including General Psychology, Orientation to Psychology, Research Design & Analysis I & II, History of Psychology, Advanced Statistics, Cognitive Psychology, and Sensation & Perception.  His research interests include language processing, memory, and the teaching of psychology.  He often collaborates with students on research projects and has coauthored publications and conference presentations with undergraduate and graduate students. 

  • 12 Feb 2019 3:32 PM | Anonymous member (Administrator)

    By Ashley Waggoner Denton, Ph.D., University of Toronto

    As an undergraduate student, I learned that being primed with the stereotype of professor could make me act smarter, that I might deplete my self-control if I refused the tempting cookies presented to me at a meeting, and that if I conducted a study whose findings were unexpected, I could just rewrite my introduction and tell a new story. Thankfully, I also learned how to learn, which prevented me from becoming trapped in a knowledge time warp. Psychological “facts” have an estimated half-life of seven years (Arbesman, 2013). Seven! This means that by the time you have completed graduate school, half of the psychological findings you learned as an undergraduate will have been updated, revised, or deemed outright wrong. Such is the nature of scientific progress. However, this helps drive home the point that one of our most important goals as teachers is to help our students develop into lifelong learners who will be able to continue learning (effectively and across a range of topics) long after they have left our classrooms. 

    The term learning how to learn comes from Fink’s taxonomy of significant learning (2013), which includes six major categories: foundational knowledge, application, integration, human dimension, caring, and learning how to learn. If you are not familiar with Fink’s model, I highly recommend checking it out (see recommended reading below). Learning how to learn takes a number of different forms, and in each of the courses I teach, at least one of these forms is emphasized. The first form is learning how to be a better student, the second form is learning how to construct new knowledge in a discipline, and the third form involves helping students become “self-directing learners” (Fink, 2013, p. 59), the key to which involves the ability to critically reflect on one’s own learning. Below I provide some examples of how I encourage these various forms of learning how to learn in different courses that I teach.

    Learning How to Be a Better Student

    Without a doubt, this form of learning how to learn gets emphasized the most in my Introductory Psychology class. In order to encourage my students to adopt better learning strategies, I don’t just teach them what psychologists have learned about effective study strategies (see links to helpful resources from the Learning Scientists below). Instead, I first let the students tell me (via a survey or in-class response system) how they typically study, and then I frame the lesson around their responses. Specifically, I address the limitations of their common habits (e.g., cramming) and study strategies (e.g., re-reading), explain why these strategies seem appealing despite their limitations, and then provide the students with more effective replacement strategies (e.g., retrieval practice), including an overview of the research that has been done on each strategy and specific tips for how to implement these strategies in Intro Psych. Rather than presenting this information in a preachy way (“everything you are doing is wrong and I know better!”), I want the students to recognize that they are not alone in using these common strategies, and that I completely understand why they use them, but that I have good reason to believe they can learn even more effectively by adopting some new strategies.

    In a similar vein, I also present students with research on the effects of technology use on learning (both in the classroom and when they are studying on their own). Again, I ultimately leave it up to the students (as self-directing learners!) to make their own decisions, but I arm them with the information that will allow them to make informed decisions about whether they take notes with a laptop or on paper, where they should leave their phone during class or a study session, and so on. A detailed slide-deck that can be used for covering this material in your own classes is available via a link below.  

    Learning How to Construct New Knowledge

    We all know that students should practice writing and get hands-on experience doing research as much as possible. Encouraging this form of learning how to learn is standard in any research methods or laboratory class. But it’s worth spending a moment to reflect on the type of inquiry and knowledge construction students are engaging in across all of your courses. Are they being pushed enough? Are they being asked to truly write and think “like a [social/cognitive/clinical etc.] psychologist,” or are they simply getting practice using some new terms and theories? As an example, students in my Intro to Social Psychology course used to complete an assignment where they analyzed an event from a social psychological perspective. It was a perfectly good assignment, but what were the students actually learning? Application is important, don't get me wrong (it has its own category in Fink’s model), but I have since replaced this assignment with an observational study project where the students must develop a hypothesis; design a study; collect, analyze, and interpret their data; and write everything up in a final APA-style report. This new assignment obviously requires a lot more scaffolding and resources, but the students walk away from the course not just being able to apply the knowledge they’ve learned, but with the ability to potentially contribute to that knowledge base. Additionally, they are in a better position to recognize the limitations of drawing conclusions from single studies and the importance of replication and reproducibility.

    Learning How to Become Self-Directing Learners

    Most of what our students do, they do because we tell them to. For example, students in my Social Psychology Laboratory class complete a research proposal because that is what they are told to do. They develop their own research question and hypothesis and design their own experiment, which all seems perfectly “self-directed.” However, the task falls short of its goal if the students fail to engage in a critical reflection of their learning throughout this process. The way that I encourage this (in this class and others) is through the use of reflective learning journals. Reflection changes everything. When students are encouraged to reflect on their learning it can improve their self-monitoring and goal-setting capabilities as well as lead to changes in study habits and other skills. It encourages students to focus more on the how and why of their learning, rather than simply on what they are learning. Students who are able to critically reflect on their learning are much more likely to develop into self-directing learners, so I do whatever I can to give my students practice with reflection. More information on how I have implemented reflective learning journals into my statistics course can be found in the Waggoner Denton (2018) article listed below.

    Self-directing learners are able to recognize gaps in their understanding and formulate plans for filling those gaps. As a developing teacher, you are likely to start noticing all sorts of gaps in your knowledge and skills (all those things that manage to go unnoticed until we actually have to explain them to someone!). The next time you go about filling in one of those gaps, take some time at the end to reflect on the process you just undertook. Who did you talk to? What did you read? Could you have done it better or more efficiently? And how did you know how to do these things? Would your students know what to do?

    Below are some resources that may be useful as you consider how to incorporate certain aspects of learning how to learn more fully within your own courses!


    Additional Reading/Resources:

    • Reflective Learning Journals in Statistics: Waggoner Denton, A. (2018). The use of a reflective learning journal in an introductory statistics course. Psychology Learning and Teaching, 17, 84-93. DOI: 10.1177/1475725717728676



    Arbesman, S. (2013). The half-life of facts. New York: Penguin. 

    Fink, D.L. (2013). Creating significant learning experiences: An integrated approach to designing college courses. San Francisco: Jossey-Bass.

    Ashley Waggoner Denton is an Associate Professor, Teaching Stream in the Department of Psychology at the University of Toronto. She received her Ph.D. in Social Psychology from Indiana University and completed her bachelor's degree at the University of Toronto. She teaches courses including Introductory Psychology, Social Psychology, Statistics, and the Social Psychology Laboratory. She also supervises undergraduate research projects that examine questions related to the social psychology of teaching and learning.

  • 17 Jan 2019 10:00 AM | Anonymous

    By Teresa Ober, Elizabeth Che, and Patricia J. Brooks, GSTA Leadership

    In Fall 2018, the GSTA distributed a short survey to gather informal input about graduate students' preferences with regard to a possible mentorship program. We were specifically interested in gauging whether graduate students would be interested in a program where they would be mentored by early career psychologists.

    There have been past efforts to establish mentorship programs within existing professional organizations. The Society for the Teaching of Psychology recently formed a mentorship program pairing early career psychologists and advanced graduate students with more senior full-time faculty. The program was featured in a recent GSTA blog post by Dr. Diane Finley, which describes some of the history and benefits of mentorship. Mentorship is thought to encourage networking, collaboration, and the sharing of instructional resources and ideas. Beyond these benefits, mentorship has also been shown to relate to decreased work-family conflict and increased job satisfaction in the long term (Tenenbaum et al., 2001).

    To date there has been relatively little systematic and quantitative research on mentorship as an evidence-based practice (Troisi, Leder, Stiegler-Balfour, Fleck, & Good, 2015), and virtually none on mentorship of graduate students in psychology. Existing research on professional mentorship between faculty and students indicates that it consists of two distinct components: instrumental and psychosocial help (Tenenbaum et al., 2001). “Instrumental help” involves coaching and training. “Psychosocial help” includes empathizing and counseling. In conducting this survey, we were particularly interested in the types of instrumental help that graduate students might seek in a mentorship program, as well as what types of mentorship models and modes of communication would be preferred. Research in this area is necessary to understand whether graduate students have unique needs and interests as potential mentees.


    We sought to identify interests related to professional mentorship among graduate students, particularly those with a background in teaching. Last fall (October 12-November 7, 2018), the GSTA distributed a short survey to gather informal input about the preferences of graduate student instructors that would help to guide recommendations for a possible mentorship program. Graduate students were invited to participate in the survey through various STP channels of communication, including the STP and GSTA social media pages (e.g., Facebook, Twitter) and email (STP/DIV2 listserv). The survey received a total of 78 responses, summarized below.

    Sample Characteristics

    Graduate student respondents were asked various questions about their areas of specialization and years in graduate school. Approximately one out of four respondents indicated their field was social psychology (25.6%). There were equal proportions of respondents from clinical and cognitive psychology (14.1% each), followed by developmental psychology (9.0%) and neuroscience (7.7%). Nearly half of respondents were in the second (23.1%) or third (25.6%) year of their program, followed by those in the first year (16.7%). Respondents in their fourth (12.8%), fifth (14.1%), sixth (6.4%), or seventh or higher (1.3%) year together represented about one out of three respondents.

    When asked about their post-graduation plans, more respondents indicated an interest in working at a research-based institution (61.5%) than at a teaching-based institution (41.0%); note that respondents could indicate interest in both. Respondents indicated a preference to work at a public institution (59.0%) over a private institution (42.3%). There appeared to be a negligible difference in the preference for working at a large institution (46.2%) as opposed to a small institution (44.9%). A minority of respondents indicated an interest in working at a nonprofit organization post-graduation (2.6%).

    Interest in a Mentorship Program

    Over 9 in 10 of the respondents indicated either a potential interest (51.3%) or a definitive interest (39.7%) in being mentored by an early career psychologist. The remainder (9.0%) did not indicate an interest, nor did they provide an explanation for why they did not have an interest.

    The survey asked which topics respondents would like mentorship to address; note that respondents could indicate interest in multiple topics. Half of the respondents indicated they would like mentorship to focus on how to prepare for the job market (50.0%). Others indicated they would also like mentorship around teaching (11.4%), how to prepare work for publication (10.0%), research advisement (10.0%), engagement in service (8.6%), innovation in the field (1.4%), and jobs outside of academia (1.4%). Some indicated they were open to and interested in mentorship on all of the above topics (5.7%).

    Respondents were asked what types of mentorship models they would most prefer. Over half of the respondents indicated an interest in dyadic mentorship (60.6%), while a minority indicated interest in a group mentorship model (35.2%). Other respondents were content with either option (4.2%).

    The survey asked respondents how frequently they would like to communicate with their mentor(s). Most respondents preferred meeting about once a month (46.5%) or twice a month (36.6%). Communicating on a weekly basis was less popular, though still preferred by some respondents (11.3%). Even fewer indicated an interest in communicating less frequently, about once every three months (5.6%).

    Respondents also indicated their preferred channels of communication for a potential mentorship program, with respondents given the option to select multiple options. The vast majority of respondents indicated a preference for email (88.7%) or in-person (85.9%) communication. About half also indicated a preference for video calls (47.9%). Other respondents indicated phone (36.6%) or text messaging (36.6%) as preferred channels of communication as well.

    Summary of Key Findings

    Mentorship opportunities may be especially beneficial for graduate students as they try to gain a professional footing. Such opportunities can connect graduate students studying psychology to others in the field, possibly leading to long-term collaborations. Because there had been no previous systematic investigation into the needs and interests of potential graduate student mentees, we distributed this survey to gather this information. The responses indicated a preference for a mentorship program structured around a dyadic mentor-mentee arrangement. The results also suggested that respondents preferred communication on a monthly or twice-monthly basis. The most popular means of communication appeared to be email and in-person meetings; however, over a third also indicated a preference for video calls, phone, or text messaging. These findings shed light on effective ways to organize a mentorship program.

    With regard to the focus of the mentorship, given that we recruited through STP and GSTA communication channels, we were surprised that fewer than half of the respondents (41.0%) indicated interest in a teaching-based position post-graduation, and even fewer (11.4%) indicated interest in mentorship around teaching. Most of the respondents were in the earlier years of their program (first to third), suggesting that there is demand for a mentorship program geared towards students in the earlier phase of their doctoral studies.

    Our findings pointed towards a greater interest and need among graduate students for mentoring on issues centrally related to preparing for the job market. Recent news articles have featured the many challenges associated with entering the job market (Smith, 2019), particularly for those who are pursuing careers in academia (Smith, 2017). Given the context of such a competitive job market even for highly skilled individuals, a successful mentorship for graduate students should incorporate both aspects of help described by Tenenbaum et al. (2001), with a focus on preparing students with the instrumental knowledge necessary for applying for jobs, and the psychosocial support to buffer the challenges and inevitable rejections they will experience in the process.

    Participation in mentorship may create expectations around the education and training of graduate students as a continuous endeavor (Epstein & Hundert, 2002). Such a perspective may be particularly helpful for advanced graduate students and recent post-graduates who anticipate preparing for a competitive job market, particularly in academia. Professional mentorship opportunities may be one way to better prepare recent graduates for a long-term career, rather than forcing them to abruptly recalibrate their job ambitions. Having such opportunities beyond the formal student-advisor relationship may be one means by which institutions and organizations can promote a culture where the continual development of professional competency is held in high regard.


    Epstein, R. M., & Hundert, E. M. (2002). Defining and assessing professional competence. Journal of the American Medical Association, 287, 226–235.

    Smith, N. (2017, Oct 4). Too many people dream of a charmed life in academia. Bloomberg.

    Smith, N. (2019, Jan 9). Burned-out millennials need careers, not just jobs. Bloomberg.

    Tenenbaum, H. R., Crosby, F. J., & Gliner, M. D. (2001). Mentoring relationships in graduate school. Journal of Vocational Behavior, 59(3), 326-341.

    Troisi, J. D., Leder, S., Stiegler-Balfour, J. J., Fleck, B. K., & Good, J. J. (2015). Effective teaching outcomes associated with the mentorship of early career psychologists. Teaching of Psychology, 42(3), 242-247.

    Teresa Ober is a doctoral candidate in Educational Psychology at the Graduate Center of the City University of New York. Teresa designed and created Manuscript Builder in completion of the certificate program in Interactive Technology and Pedagogy at the Graduate Center. She is interested in the role of executive functions in language and literacy. Her research has focused on the development of cognition and language skills, as well as how technologies, including digital games, can be used to improve learning.

    Elizabeth S. Che is a doctoral student in Educational Psychology at the Graduate Center, CUNY and the GSTA Deputy Chair. Her research interests include individual differences in language development, creativity, and pedagogy.

    Patricia J. Brooks is Professor of Psychology at the College of Staten Island and the Graduate Center, CUNY and GSTA Faculty Advisor.  Brooks was recipient of the 2016 President’s Dolphin Award for Outstanding Teaching at the College of Staten Island, CUNY.  Her research interests are in two broad areas: (1) individual differences in language learning, (2) development of effective pedagogy to support diverse learners.​

  • 12 Dec 2018 11:09 AM | Anonymous

    Carolyn Stallard, Ph.D. Student, The Graduate Center, CUNY 

    This past October, I had the privilege of volunteering for and presenting at the 9th Annual Pedagogy Day, held at the Graduate Center, CUNY and organized by members of the GSTA. At first I was concerned: would my non-psychology-focused presentation go over well at a conference hosted by the Psychology department? The conference was open to anyone, but as a music educator, would I benefit from the presentations?

    As the proceedings began, it was immediately evident that my concerns were for naught; from the start I could see that this was a conference of great benefit to anyone interested not only in psychology but also in higher education pedagogy in general. I truly feel as if I learned something useful from every presentation I attended. In particular, I loved the message of keynote speaker Sue Frantz (click here to see a recording of the keynote address), who challenged the audience, when preparing to teach an Intro Psych course for undergraduates, to consider what their real-life neighbors might need to know about psychology. Though the topic at hand was purely psychology, I found the question to be relevant to any course taught to students who are not planning to major in a particular subject. When teaching an introductory course, it is important for educators to remember that the students enrolled will someday be our neighbors – construction workers, educators, dentists, pilots – so what do they need to know about the subject being taught?

    My contribution to Pedagogy Day was a bit different than Sue Frantz’s. Rather than challenge the audience to think about what non-majors might need to know, I challenged them to think more creatively about the collection and retrieval of information, particularly to encourage/improve research and critical thinking skills. I shared information on a mod – a modification of a pre-existing game – called Superfight by Jack Dire. In the original version of this self-described “game of absurd arguments,” players draw three each of two kinds of cards: Characters and Attributes. For instance, a player may have a hand containing three characters – Abraham Lincoln, Superman, and Godzilla – and three attributes – the ability to shrink in size, a water gun, and a beard full of bees. Each player then chooses a character and an attribute from their hand and “battles” through debate against another player to determine who would emerge victorious in a fight. A third player judges the debate and decides who wins.

    Figure 1. Example of the original Superfight game by Jack Dire

    My version, called Music Melee, takes this debate concept and the accompanying game mechanic of randomization and applies it to music history. In the basic version, character cards contain information about a significant musician we’ve studied in class, and attribute cards are instruments. Students form groups of three (two to actively debate, one to judge), choose an artist and instrument from their hand, and make an argument for why their choice would outperform their opponent in a battle of the bands.

    Figure 2. Example of Superfight cards created by students

    To make the game run smoothly in my 50-student class, after each of the initial three players in a group has battled each other (creating a “best two out of three” scenario) the two losing players join the winner as support, resulting in a new team of three. This pod of three then finds two other pods to debate (with a larger arsenal of characters to choose from now) and again, the two losing pods join the winner, creating teams of nine. This continues until there are three massive teams debating in the room. At this time, for the final round, only the team champion (the single person on the team who has yet to lose any debate) can speak for their team in a live debate in front of the class.

    This game is useful for a number of reasons:

    1. Students create the cards. In the week(s) leading up to the game, each student must choose a couple of artists to research. Thus, the students themselves create the playing deck, requiring very little preparation from the instructor. I give the students a number of specific points they must include on their cards, treating each as a small research project, but another professor might adjust this in a different way.
    2. It encourages not only information retrieval, but also critical thinking. Often, the instrument/musician pairings are not ideal; the best combo a student might produce from their randomized hand might be, for instance, Umm Kulthum with a didgeridoo. In this situation, students must get creative in their arguments, not only recalling information learned in class but also considering what factors might create a persuasive argument (in this example, a student might argue that because Umm Kulthum is considered such an important vocalist in Egypt and the broader Middle East, she may have the lung power to master a didgeridoo better than, say, Jimi Hendrix, who did not play any aerophone instruments).
    3. Students learn what does or does not make a strong argument, which can later be applied to research papers. In my class, we spend time before the game having a full class discussion to determine what does or does not make a strong argument. We create a sort of rubric on the board, which students can then refer to when making or judging an argument during gameplay.
    4. The game can be easily modified to up the ante. For instance, you may add in random situations/scenarios mid-debate: Suddenly the musicians are performing in 18th century Venice or can only perform acoustically; how does this affect the argument being made?
    5. Likewise, the game can be modified to teach a number of different subjects. At Pedagogy Day I asked the audience for ideas, and a number of instructors mentioned the idea of creating a deck of psychologists as the character cards, with various research variables/items (B.F. Skinner’s rats could be a card, for instance) as the attributes. 

    My interest in games as a tool for learning has led me to become involved in the CUNY Games Network. As part of the steering committee for the CUNY Games Network, I firmly believe that game-based learning is a useful method for teaching any subject in higher education. Music Melee is a simple game, but whenever I introduce it in class the students latch on; I am always surprised when students who have been quiet all semester suddenly come alive during their debates.

    This is just one example of game-based learning (GBL) and “modding.” To learn more, I highly encourage anyone interested to take advantage of the plethora of resources and fellow pedagogues interested in GBL in higher education here at CUNY. The CUNY Games Network is open to anyone (CUNY or non-CUNY, as long as your interest is GBL in higher education), and we welcome those with previous GBL experience as well as those just starting out. To sign up for the mailing list and read more about games in the classroom, visit the CUNY Games Network website. There, you will also find information about the upcoming CUNY Games Conference 5.0, which will be held January 18, 2019 at Borough of Manhattan Community College (click “Events” to find the conference info).

    Carolyn Stallard is an Ethnomusicology student and Senior Teaching Fellow at the Graduate Center and an adjunct instructor at Brooklyn College.  She is a member of the steering committee for the CUNY Games Network and researches game-based learning in higher education.

  • 29 Nov 2018 3:30 PM | Anonymous member (Administrator)

    By Laura Freberg, Ph.D., California Polytechnic State University, San Luis Obispo

    Choosing the right materials to support your course is one of the most important decisions an instructor must make. Whether you choose your own materials independently, serve on a textbook decision committee, or administer a course for which materials are chosen for you, this decision will have significant implications for the quality of the course experience for you and your students.

    Today’s instructors face a bewildering array of choices, which has both an upside and a downside. On the positive side, having many choices means courseware can be tailored to a specific group of students with characteristics best understood by their professor. On the downside, reviewing the many available materials represents a significant commitment of instructors’ time, which is already in very short supply.

    The point of this article, then, is to help instructors focus on some of the key variables involved in courseware decisions. In the interest of transparency, I am actively authoring two traditional textbooks for Cengage as well as serving as lead author for a lower-cost electronic textbook for TopHat. I have also worked with the APS Wikipedia Initiative and even sat on a panel for APA on “Teaching Without Textbooks.” While I fully appreciate the success of open source software, the typical model for open education resources (OER), I am also willing to pay for outstanding proprietary products like those from Adobe. The point is to obtain the tools that best fit your needs.

    Pros and Cons

    Each type of courseware has its own set of strengths and weaknesses. By examining these, we can begin to identify areas where the materials differ.

    The Traditional Textbook              

    The traditional textbook provides the complete package. Not only do you get a heavily peer-reviewed document, which minimizes errors, but publishers generally provide testbanks, instructors’ manuals (e.g., lecture notes, activities, lists of TED talks and videos), online homework and enrichment activities, and PowerPoints. This option is literally “Doc in a Box.” The textbooks are also updated at regular intervals. This might not be essential in algebra, but it’s a must in sciences like psychology.

    On the negative side is the elephant in the room—cost. Many people do not know why the costs of traditional textbooks are high, which contributes to the mentally lazy vilification of traditional publishers as “evil corporations.” The actual printing cost of a book is relatively little. Most of the cost represents work by a fairly large group of people, not just the authors. We have development editors who help shape our content, copyeditors, photo researchers, indexers, and sales teams. Hundreds of paid peer reviewers scour our work for errors. Still others produce the testbanks and other ancillaries, which are also reviewed. The publisher must ensure that online materials present a positive user experience, leading to an ever-increasing need for expensive IT people and equipment. Traditional publishers are held to a very high standard of accessibility and ADA compliance, which is also expensive.

    Publishers of both textbooks and books for the general public face these same challenges. What makes life much harder for textbook publishers is the impact of the used book and rental markets. Who sells or rents their copy of Harry Potter? Sacrilege! The relatively tiny printing cost is the only expense that depends on the number of books produced; the remaining costs I mention must be paid regardless of how many books are sold. If these costs could be spread over every reader paying for a new copy (as happens with Harry Potter), traditional textbooks would be as affordable as Harry. This doesn’t happen, of course, as the vast majority of students assigned a textbook purchase used or rental copies. In spite of “don’t sell” notices adorning instructors’ desk copies, some instructors sell them anyway for extra spending money. Amazon affiliates and campus bookstores are the main recipients of this largesse: they pay the student very little at buyback, store the book on the shelf for a few days, then sell it again at nearly new prices. None of this money, of course, goes back to the publisher to offset production costs. What makes the traditional textbook expensive is the fact that new book sales represent a relatively small fraction of overall users.

    Image: The upper graph demonstrates the effects of the used book market on publisher sales, and the lower graph demonstrates the effects of both the rental and used book markets over the six-semester lifespan of a textbook (Benson-Armer, Sarakatsannis, & Wee, 2014). Publishers only recoup production costs from new sales, not total use of their intellectual property. 1 = Assumes the percentage of students who do not acquire textbooks shifts from 20% to 8% due to introduction of the rental market (which reaches 30% penetration).

    Just as the music industry did to avoid the hemorrhage that was Napster, publishers have moved to electronic versions of textbooks, or the iTunes model. The advantage to the publisher is not due to lower printing costs, which are quite small anyway, but rather to the spreading of production costs across more users because resale is limited. Electronic books are typically half or less of the cost of the print version and will go lower as adoption of electronic books increases. Incidentally, the idea that students learn better from print than electronic books appears to be a myth, at least according to careful research presented by Regan Gurung (2017). For students who insist on something they can hold in their hands, publishers make loose-leaf versions available for a few extra dollars over the electronic book cost.

    Electronic materials have another advantage. Many students do not buy their assigned text. When instructors use electronic books and their associated homework, they know exactly who does and does not purchase a textbook. The analytics associated with the electronic books even show you how much time each student spends with the materials, which can be very helpful when advising a student doing poorly in your class.

    We still don’t know how well the revolutionary Cengage Unlimited model is going to work. This model (nicknamed the Netflix model) allows students access to ALL Cengage titles while paying a slightly higher fee than they would for a single electronic title. Students rarely pay attention to the publishers of their assigned textbooks, but this model might make them more sensitive to that. If successful, we can anticipate all of the major publishers will begin offering this service.

    Before jumping to conclusions that students ALWAYS want low-cost or free textbooks, consider the following. Many of the lowest income students are receiving federal grants that include the purchase price of new textbooks. Washington surely doesn’t want the textbooks back at the end of the term, so the student is free to sell the books. This provides a significant income for these students, who object strenuously to OER or electronic books with no resale value. Affluent students often follow a similar strategy. Their parents pay for books but do not consider their resale value, so the student gets a bit more discretionary income without having to ask for it.

    A final key aspect of the decision to use a traditional textbook is whether the instructor actually needs a textbook. If you provide students with comprehensive study guides and PowerPoints, and assess exclusively on that content, it should come as no surprise that students either do not purchase the text or complain about having to purchase the text. Texts are only valuable if they are used.

    Low-Cost Textbooks        

    All of us receive frequent emails from indie textbook publishers, both print and electronic, that promise a less expensive option of perhaps $40 or so. These options vary substantially in quality and in the support provided for the instructor.

    My all-electronic TopHat project probably represents the higher end of this classification in terms of quality. We have very capable development editors and were supported by a copy editor and graphics designer. Costs are cut by using photos that were open source. Photo permissions for traditional textbooks can be very pricey. I actually purchased the “blue/black or gold/white” illusion dress from eBay and wore it for a photo for my Cengage books (and yes, I can attest to the fact that it really is blue and black) when the difficulties of obtaining permission to use the original photo were insurmountable. Another cost savings was the relatively sparse pre-publication review. We had one person per chapter review our work prior to publishing compared to hundreds in traditional text publishing, which I must say made me nervous. TopHat assumes that crowdsourcing will fix problems after publication, a point of view shared by many open educational resource (OER) advocates.


    Image 1 Caption: To save money, I actually purchased a version of the blue/black/white/gold dress on eBay so we could include a photo in my textbooks. The original photo is on the left and I am wearing the dress on the right. My husband, armed with his cell phone camera, and I walked around the house until we found lighting that allowed us to duplicate the illusion.

    In spite of these cost savings, TopHat charges $61 for our book, as opposed to the $95 cost of the basic Cengage electronic books. So even when you do not have to worry about the effects of used and rental textbooks on your sales, there is an underlying truth about the costs of producing quality materials: they can only go so low.

    Open Education Resources (OER)                

    The largest advantage of OER is cost to the student. Who doesn’t like free stuff? Having paid for my own college education, I am not unsympathetic to this. Instructors can endear themselves to their students and administrations by using OER, resulting in higher evaluations. Their classes get special recognition in the registration process, in a not-so-subtle public shaming of instructors who prefer traditional or low-cost materials. Note that OER is not “free” at the institutional level. Colleges and universities spend considerable resources on grants and support personnel for OER that reduce support for other functions.  

    OER advocates tell me that cost is not the only advantage. You can bring in a multitude of materials in addition to a free text and instructors can adapt the material to fit their needs. I agree, but there’s nothing stopping you from doing these things WITH the electronic versions of traditional texts, which have the capability of embedding videos, assessments, activities, and documents seamlessly. In many cases, the publishing company staff will set up the course the way the instructor wants it.

    On the downside, if traditional textbooks are “Doc in a Box,” OER materials are stone soup. It might be possible to simply use materials as-is, but I know very few people who do that. Some people enjoy the revising and curating process, but others simply can’t fit additional course prep time into a heavy research and service load. Additionally, instructors might not have the necessary skillset for these tasks. Being a great teacher and researcher does not automatically make you a courseware expert.

    Plans for the continuity and updating of OER materials by entities such as OpenStax are somewhat vague. Foundation money might cover original production costs, but who takes ownership of the ongoing health of these materials?

    As mentioned previously, OER materials, unlike traditional text materials, are rarely assessed for ADA compliance. You don’t have to look at too many materials before finding some with blinding areas of inaccessibility. Bringing materials into compliance is expensive and time-consuming, and lawsuits are even more so. Until now, OER materials have been given a “pass” not enjoyed by traditional publishers, but that is not likely to last forever.

    Head-to-Head Comparisons

    In 2017, Regan Gurung undertook direct comparisons between OER materials in introductory psychology (NOBA) and three traditional textbooks (Hockenbury & Hockenbury, Cacioppo & Freberg, and Myers & DeWall). You might have heard of numerous studies that show that students using OER have the same level of achievement as students using traditional textbooks, but Gurung carefully points out and controls for the design flaws in those efforts. He concluded that traditional textbook “users enjoyed their classes less and reported learning less than OER users but still scored higher on the quizzes.” In other words, just because students seem happier with OER does not mean they are learning more.

    OER are usually presented on campus as a positive contributor to social justice. This claim might be tempered if in fact student outcomes are superior with traditional materials. If less affluent, less-prepared students are more likely to be offered materials that result in lower performance, this is actually working in the opposite direction of true equity.

    Making the Decision

    As anyone teaching heuristics knows, the human decision-making system is subject to flaws. We can possibly avoid some of those flaws by thinking more systematically. One such approach is a utility model, where we assign ratings and weights to variables of interest and let the math point us in the right direction. If you’d like to try that out, I’ve provided a model you can use or adapt to your own needs.

    Begin by considering each “Feature” and assigning it a “Weight,” with “5” being “very important” and “1” being “not important at all.” Next, examine your sample materials, and assign each a “Rating,” with “5” being “very good” and “1” being “not very good.” Then, all you have to do is multiply Weight by Rating and sum the results. Ideally, this should give you an idea about which type of materials is likely to bring you the greatest level of satisfaction.
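The weighted-sum calculation described above is easy to run yourself. Here is a minimal sketch in Python; the feature names, weights, and ratings below are purely illustrative placeholders, not values from this article, so substitute your own.

```python
# A minimal sketch of the weight-times-rating utility model described above.
# Feature names, weights, and ratings are illustrative examples only.

def utility(weights, ratings):
    """Multiply each feature's Weight (1-5) by its Rating (1-5) and sum."""
    return sum(weights[feature] * ratings[feature] for feature in weights)

# How important is each feature to you? (5 = very important, 1 = not at all)
weights = {"cost": 5, "customizability": 3, "accessibility": 4, "prep_time": 4}

# How well does each type of material score? (5 = very good, 1 = not very good)
oer_ratings = {"cost": 5, "customizability": 4, "accessibility": 2, "prep_time": 2}
trad_ratings = {"cost": 2, "customizability": 3, "accessibility": 5, "prep_time": 4}

print("OER total:", utility(weights, oer_ratings))                  # 25+12+8+8 = 53
print("Traditional total:", utility(weights, trad_ratings))         # 10+9+20+16 = 55
```

The higher total suggests which type of materials is likely to bring you the greatest satisfaction; with these invented numbers the two options come out nearly even, which is itself informative.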

    No one type of courseware is likely to meet the specific needs of all students and their instructors. As empiricists, we should be willing to experiment. If what we’re doing isn’t working, we should try other things. Ultimately, our feedback and the feedback from our students can help producers of content to develop even better materials.


    Benson-Armer, R., Sarakatsannis, J., & Wee, K. (2014). The future of textbooks. Retrieved from

    Gurung, R. A. R. (2017). Predicting learning: Comparing an open educational resource and standard textbooks. Scholarship of Teaching and Learning in Psychology, 3(3), 233-248.

    Laura Freberg is a Professor of Psychology at California Polytechnic State University, San Luis Obispo, and an adjunct instructor for Argosy University Online. Dr. Freberg received her bachelor's, master's, and Ph.D. from UCLA and conducted her dissertation research with Robert Rescorla of Yale University. She is serving as the 2018-2019 President of the Western Psychological Association. 
