TeachLab with Justin Reich

Joel Breakstone

Episode Summary

Justin Reich is joined by Joel Breakstone, director of the Stanford History Education Group (SHEG) and co-lead of the Beyond the Bubble and Civic Online Reasoning projects. Together they discuss assessing online information, the research of SHEG, and the methods used by fact-checkers to determine the validity of information. “For one thing, when they did a search, they didn't immediately click on the first search result, which is what many of the Stanford students, and even some of the historians did. Instead, the fact-checkers engaged in what we refer to as click restraint. They paused, and they looked at the snippets about the search results. And they took a moment to check out the URLs, and then made a decision about where they should begin their search. Because that initial click often greatly influences the kind of search that you end up conducting.” - Joel Breakstone

Episode Notes

Justin Reich is joined by Joel Breakstone, director of the Stanford History Education Group (SHEG) and co-lead of the Beyond the Bubble and Civic Online Reasoning projects. Together they discuss assessing online information, the research of SHEG, and the methods used by fact-checkers to determine the validity of information.

“For one thing, when they did a search, they didn't immediately click on the first search result, which is what many of the Stanford students, and even some of the historians did. Instead, the fact-checkers engaged in what we refer to as click restraint. They paused, and they looked at the snippets about the search results. And they took a moment to check out the URLs, and then made a decision about where they should begin their search. Because that initial click often greatly influences the kind of search that you end up conducting.” - Joel Breakstone

Resources and Links

Learn more about Stanford’s Civic Online Reasoning!

Check out their most recent article: Students' Civic Online Reasoning: A National Portrait

Learn more about Beyond the Bubble!

Check out Justin Reich’s book, Failure To Disrupt!

Join our self-paced online edX course: Sorting Truth from Fiction: Civic Online Reasoning

Join our self-paced online edX course: Becoming a More Equitable Educator: Mindsets and Practices

Transcript

https://teachlabpodcast.simplecast.com/episodes/joel-breakstone/transcript

Produced by Aimee Corrigan and Garrett Beazley. Recorded and mixed by Garrett Beazley.

Follow TeachLab:

Facebook

Twitter

YouTube

Episode Transcription

Justin Reich:                 From the home studios of the Teaching Systems Lab at MIT, this is TeachLab, a podcast about the art and craft of teaching. I'm Justin Reich. Today's guest is Joel Breakstone, Director of the Stanford History Education Group, also called SHEG, where he co-led the development of Beyond the Bubble, an assessment website that measures students' historical thinking; and Civic Online Reasoning, a free online curriculum designed to help students tell the difference between reliable and unreliable information on the internet.

                                    Joel got his doctorate from the Stanford Graduate School of Education. He's the recipient of the Larry Metcalf Exemplary Dissertation Award from the National Council for the Social Studies, and he's a former high school teacher. Joel is also a co-conspirator for our current edX course, Sorting Truth From Fiction. Welcome, Joel, so glad to have you on TeachLab.

Joel Breakstone:           Thanks so much for having me, Justin.

Justin Reich:                 So, Joel, we're doing this short series on kind of the information crisis, the search crisis among young people in schools. How would you briefly characterize the challenges that we're up against?

Joel Breakstone:           They're substantial. Without a doubt, we need to dramatically reform the way that we are preparing young people to evaluate the overwhelming amount of information that streams across their devices. All of our research over the course of the last six years shows that students have great difficulty in making sense of even very straightforward sources that they encounter online.

                                    And beyond just their struggles, there is something even more concerning, which is that often students are making mistakes because they're using the strategies that they've been taught to use: outdated approaches to evaluating online sources that appear on websites all across the internet.

                                    Things like the CRAAP test that tells students to focus on parts of websites that are easily manipulated by folks who want to obscure their real intentions. So, we both need to work to prepare students with skills and strategies that will lead them to better evaluations of sources, as well as get rid of these problematic approaches that are so widely available and used.

Justin Reich:                 So, Joel, take us back to the beginning of your interest in this work. How did you stumble across this as a problem or how did you get interested in it?

Joel Breakstone:           So, as the name of our organization suggests, we were primarily interested in history education for a very long time. And we were involved in creating curriculum and then assessments around history education, and specifically historical thinking, how students make sense of documents. And we created a set of history assessments that move beyond multiple choice questions, that ask students to examine one or two sources and write short constructed responses to give teachers a better sense of their thinking.

Justin Reich:                 These assessments, they're a little bit like, for people who know this term, mini-DBQs, mini-Document Based Questions. Look at a couple of primary sources, make an assessment, start kind of building it towards an argument.

Joel Breakstone:           Exactly, but rather than having to do a whole bunch of documents and write a full-blown essay, you just read one or two sources and write a few sentences, so the teachers can get a quick sense of student understanding. And so, we had built those assessments, and one day we got an email from a foundation in Chicago, the McCormick Foundation.

                                    And they wondered whether we could make some assessments that focused on how young people evaluate information online, as they had been doing a great deal of grant-making in this realm. And there weren't many tools to measure whether or not those grants, and the programs that were funded by the grants, were having the impact that they claimed to.

                                    For the most part, the evaluation tools that existed were things like self-report: students saying, "Yes, I now evaluate sources more carefully," or the teacher saying that they enjoyed having this program come in. And so, they asked us to go about developing short tasks that were similar to what we had done in history, but instead more focused on digital information. And so, over the course of a couple of years, we drafted a whole range of tasks that asked students to evaluate real sources.

                                    So the kinds of things they would encounter when they go online: Tweets, Facebook videos, posts in online forums, all of the sorts of things that appear if you're looking for information online. And as we drafted them, we piloted them with students all across the country, from middle school to college. And as we did that, we were alarmed at how poorly students did. They were having difficulty doing even the most basic tasks.

                                    For instance, middle school students couldn't distinguish between ads and news stories. High school students couldn't figure out who was behind a given website. And college students were unable to figure out when a PR firm had actually developed a website, rather than it being a think tank. We released the findings of that work in November of 2016, shortly after the presidential election and when the term fake news had become ascendant.

                                    And suddenly, there was an enormous amount of interest in our work because we had data that showed that students all across the United States were struggling when it came to digital evaluations. And so, there was first a story in the Wall Street Journal, and then that was followed by reports in NPR, and then dozens of other outlets all across the country and around the world.

                                    And one of the questions that came up most frequently was: "How do we help students do better?" And we didn't have a great idea of how to do that. What we had focused on assembling were tasks to give us a sense of what students could do and what they struggled to do. And we didn't have a curriculum available. And so, that was what we turned our attention to next.

Justin Reich:                 So, as assessment designers, one of the things that you were sort of thinking is like, "Oh, let's make a bunch of tasks that are of average difficulty. Maybe we'll have some easier tasks and some harder tasks." But the thing you're kind of looking for is like a nice bell curve distribution of the quality of students' responses. But you did not find a bell curve distribution. You found a distribution skewed very badly towards very limited capacity in this domain.

Joel Breakstone:           That's exactly right. Across the board, students struggled. And we had to keep on adjusting our thinking about what students could and couldn't do. We would make tasks that we thought were way too easy, like that task with the ads versus news stories. And then students would struggle on that too. And so we kept on having to tweak our approach to try to figure out ways to give us a sense of where students could make sense of digital content and where they were still struggling.

Justin Reich:                 So, I want to get into the methods here a minute because I find them interesting, because I think I know the next part of the story a little bit. But one of the things that you and your colleague Sam Wineburg have worked on in the past is you have a method. I don't know the term that you use for it, but it's something like cognitive task analysis, which is basically asking people to do some kind of task while they talk aloud about doing that.

                                    So, your colleague Sam Wineburg is very well known amongst history teachers for asking historians and asking history students to sort of do historical thinking exercises, analyzing primary sources and things like that, while talking aloud about what they're doing. So, you can get some kind of sense, not just of whether or not people get the right answer on these tasks, but what kinds of reasoning moves they're using. And that's kind of where this research goes next. Is that right?

Joel Breakstone:           Yeah, that's exactly right. Sam and our colleague Sarah McGrew led a study where they asked three groups of people to evaluate a series of unfamiliar sources. And all three of those groups were folks who were very smart. It was freshmen at Stanford University, practicing historians from different institutions, and then a group of professional fact-checkers from the nation's leading news outlets.

                                    And what Sarah and Sam did was to ask each person to individually sit in front of a computer, to look at a series of sources, and to explain how they would go about evaluating them, recording their screens as they did it. And what stood out was that the professional fact-checkers were way, way better at evaluating those sources and at coming to correct conclusions about the reliability of those sources. So, one of the tasks directed the people to an article from MinimumWage.com.

                                    MinimumWage.com indicates that it's a project of the Employment Policies Institute, which has a .org website, and they claim to be a nonprofit nonpartisan think tank. However, if you leave their website and search for the Employment Policies Institute, you will quickly come across a series of sites that explain that the Employment Policies Institute is actually a project of a public relations firm run by a guy named Richard Berman.

                                    And he's been profiled by outlets ranging from Slate to John Oliver, all of which point out that what he does is serve as a front for industries that are seeking to influence policy debates. All of the professional fact-checkers were able to identify Berman as being behind the MinimumWage.com site. However, the historians and the Stanford students did much worse. Less than half of the Stanford students ever found out who was really behind the site.

                                    And just over half of the historians did. And those who did find out took much longer than the professional fact-checkers. And what distinguished the professional fact-checkers approach was that, instead of dwelling at that site and reading closely, they almost immediately left that site, opened a new tab in their browser and searched for information about the original site.

                                    That's what Sam and Sarah referred to as lateral reading. So, rather than reading up and down on the original website, they read across the tabs in their browser to find better information and to have a better sense of whether or not they should trust this article from MinimumWage.com.

Justin Reich:                 What can you tell me, as you were talking with Sarah and Sam, about the experience of sort of looking at the data from these fact-checkers? I imagine there must've been some moment where people were looking at this going, "Wow, these people are doing something totally different from these other smart folks and it's working much, much better."

Joel Breakstone:           Yeah. It was really striking that... Again, the historians and Stanford students, as you say, are really smart people. And I think it's important to emphasize this. This is not an effort to deride them. They're smart people, and really, they're all of us. None of us were prepared for the current digital moment. And we need to figure out ways to find better information.

                                    And by looking at the practices the professional fact-checkers use, we can have a better sense of ways to effectively sort through the overwhelming amount of information that we encounter online. And so, there's the move of lateral reading, but there are other things that professional fact-checkers did. For one thing, when they did a search, they didn't immediately click on the first search result, which is what many of the Stanford students, and even some of the historians did.

                                    Instead, the fact-checkers engaged in what we refer to as click restraint. They paused, and they looked at the snippets about the search results. And they took a moment to check out the URLs, and then made a decision about where they should begin their search. Because that initial click often greatly influences the kind of search that you end up conducting.

Justin Reich:                 Did you or your colleagues have follow-up conversations with the fact-checkers to figure out how they learned these effective practices? Did you sense that people discovered them individually by trial and error? Or is there like a fact-checker guild, which is sort of spreading around effective practice? How did the people for whom this works figure it out?

Joel Breakstone:           Yeah. No, I think that it was trial and error, but more than anything, it was the nature of their jobs: what they have to do is constantly confirm information or refute it. And as a result, they did not rely on their own intelligence to know whether or not something was accurate. Instead, they turned to the broader web. That's the incredible power of the internet: there are almost limitless sources out there. And what you need to do is try to find better ones.

                                    And so, I think that really what led to the differences was the nature of their jobs: they can't trust what something says. They have to go and verify it. And the way they do that online is to seek out other sources. In contrast, Stanford students have spent a great deal of their educational lives being directed in schools to read closely and carefully and deeply.

                                    And in many cases, that leads to good grades and high scores on standardized tests, but can lead you badly, badly astray when you go online and sources are not what they say they are. And so, what we need to do is redirect our attention away from close reading. And instead, what we need to do more of is some strategic ignoring, so that we're not diving deeply into a site until we know that it's worthy of our time and attention.

Justin Reich:                 Strategic ignoring as a critical skill for the internet age. So, I know that there's a bunch of studies that you've done since that early discovery work, that assessment work, that work on lateral reading. But you have a big new study out today, May 26, in Educational Researcher, which is one of the leading journals of the American Educational Research Association. Tell us what the origins of this study were, and sort of how it adds to our body of research about this challenge.

Joel Breakstone:           Yeah, so this new study is in many ways a follow-up to that original set of research we did and released in 2016. After the events of the presidential election of 2016, there was a great deal of interest in issues related to misinformation and disinformation. And many states began to draft and pass legislation calling for media literacy instruction. And we wondered, "What has happened? Are students better equipped now to make sense of online information?" And so, we set out to-

Justin Reich:                 [inaudible] like a four-year checkup.

Joel Breakstone:           Exactly. Yeah. It is a checkup for sure. How are things going? And so, we drew a sample of students that reflects the population of high school students in the United States. And it's the largest study to date where students were asked to evaluate real online sources.

                                    And I think that's a really important difference here that so often assessments of digital literacy ask students hypothetical questions: "What might you do?" Or show them print sources. What we did was to send students out onto the internet and ask them whether or not they trusted the kinds of sources they encounter all the time. We ended up with a sample of more than 3000 students.

                                    And when we went through and graded all of their short responses to looking at those sites, we were deeply alarmed. Once again, students struggled, they were unable to perform even the most basic evaluations. More than 90% of students were unable to find out that a climate change denial website was receiving funding from fossil fuel companies.

                                    More than half of students trusted a video claiming to show voter fraud during the 2016 presidential primaries, saying it was strong evidence of voter fraud in the United States. But if you do a quick Google search to find out information about that video, you can quickly come across a whole series of articles, including ones from Snopes and the BBC, pointing out that those videos have nothing to do with the United States.

                                    And in fact, they show voter fraud in Russia. If students believe that that is evidence of voter fraud in the United States, the health of our democracy is imperiled. So, unfortunately, the study gives clear evidence that we really need to reinvest in our approach to digital literacy and really rethink how we're going about doing it.

                                    Because once again, we saw students engaging in these deeply problematic ways of evaluating sources. They looked at things like the URL of a website. So a .org website was more trustworthy than a .com. Again, that's the kind of guidance that appears on so many of these checklists of website credibility online. Other students did a move that we refer to as checking facts.

                                    So, rather than fact-checking to figure out whether or not a source was credible, they just looked at individual facts. So, they looked at that same article from MinimumWage.com that I described before. And in some cases, students were just deciding whether or not it was trustworthy based on whether the details about minimum wage were correct.

Justin Reich:                 So, yeah, because people who are disseminating misinformation, they mix misinformation with correct information. They're not creating fully fantastical stories. They're lining up true facts in misleading ways or in biased ways, or they're sort of sliding mistruths and half-truths in with other true things. It's really not about whether any individual claim is accurate. It's more about the perspective the whole source has.

Joel Breakstone:           Without a doubt, that is the nature of the problem. And moreover, the people who are seeking to deceive are very aware of the ways to make their site or their claims seem credible. So, include links to other sources, especially credible sources. Mix in the problematic content with trustworthy claims.

                                    Similarly, have an about page that touts the credentials of the individuals who are working there. Indicate that it's non-partisan or nonprofit, all of these things that people associate with trustworthiness. And as a result, if you want to deceive people, it can be pretty easy to do so.

Justin Reich:                 Yeah. Basically, the bad guys have all of the same checklists that teachers are using to teach their students how to evaluate websites. And they're just going through those checklists, those markers of credibility, and saying, "Oh yeah, we can make up those. We can fabricate those." I mean, to me, it's such an interesting problem because there are so many issues in education for which the solution is enormously complicated.

                                    If you have students coming into the eighth grade at a third-grade reading level, there is no obvious, straightforward way that you help that student get the support, build the motivation, develop the skills to get to an eighth-grade reading level, to cover five years of reading in one year or something like that.

                                    There are all kinds of really hard challenges in educational systems. By contrast, this challenge, it's daunting in its scale, but the sort of solution set seems relatively simple. The things that these fact-checkers are choosing to do, these basic strategies of lateral reading and click restraint, they're not that hard to teach.

                                    And in fact, you have evidence that when you teach these skills in relatively short periods of time to middle school students, to high school students, they get better at them. What's your sense of sort of what it will take for you to be able to do another of these checkup studies in 2024 and see some real progress on the issue?

Joel Breakstone:           I think what's going to be required is that there is actually a shift in the way we go about teaching web credibility. We can't continue doing what we've been doing. As you note, we now have conducted a whole series of studies that show us that we can move the needle. We can indeed improve students' ability to sort fact from fiction online, but we need to figure out a way to make that happen at scale.

                                    We can't just continue on having students use the CRAAP test mixed in with some lateral reading. We need to reorient our approach to teaching digital literacy, or else things are going to look really similar in the future. What we think is most important is that we figure out a way to embed these ways of reasoning across the curriculum. We can't just make this an add-on, a one-off led by a librarian.

                                    If we do that, it's going to be a barnacle on the hull of an already bloated curriculum. We need to figure out a way to embed this way of thinking and reasoning into regular classes, so that if you're in a science class and being asked to go online to do research, it becomes routine to do a brief bit of lateral reading to determine whether or not the sources that you are finding are trustworthy ones.

                                    Or to have students look at graphs in a math class to determine whether or not the axes on the graph had been manipulated in order to further a particular point in a policy debate. And that's what's going to be required to give students the opportunities to practice. Because even in our studies where we have implemented these kinds of ways of reasoning in the curriculum, there's a lot of room for growth.

                                    We see statistically significant results, but students still aren't getting every question right. They're doing better than the control group students who did not complete the curriculum. But students still have a hard time knowing what kinds of sources they should examine when they engage in lateral reading.

                                    When they are practicing click restraint, they don't always know what the markers of better sites are. And part of that is just practice: you need a lot of repetitions to become familiar with the kinds of questions you should be asking, and the kinds of sources that you routinely encounter, as well as the kinds of sources that are trustworthy.

                                    We don't become aware of all of that kind of content knowledge immediately. So, we've got to give students a lot of time to practice. Like any complex way of thinking, it takes a while to develop expertise. But in just a little bit of time, we can make a big difference by reorienting students to not just trust information at face value.

Justin Reich:                 And those repetitions probably do need the help and support of adults, mentors, experts, because doing this work is not like shooting free throws. You can shoot a thousand free throws and you can actually count how many of them go in and how many don't. When you're doing these kinds of sort of regular information search literacy practices every day, you don't always know whether or not you've accurately assessed a source.

                                    And you don't always see what kinds of alternative approaches there are to assessing a source. And that's really one of the ways that adults from all teaching disciplines, all of the adults who support youth learning, can be helpful here.

Joel Breakstone:           That's exactly right. What we need to do is model ways of examining sources for students, so that they have a sense of, "What does it look like to effectively sort through a search results page? How do I determine whether or not that source that claims to be trustworthy is indeed trustworthy?

                                    And how do I know what are the kinds of news outlets that I might consult when I'm engaged in some lateral reading?" Students are going to depend on the guidance of adults, and ultimately then their peers as well. As students become more skilled, they can begin to help each other as well to build out that knowledge.

Justin Reich:                 Joel, we should talk about one more persistent myth or misconception. And it's one that I've been interested in for a long time. So, when I was doing education technology consulting with schools, sort of starting in 2008 and 2009, there was a relatively new website called Wikipedia, which teachers had a great deal of distrust of. It is an encyclopedia that's just put together by average citizens of the web.

                                    And in 2008, I was trying to convince teachers, "No. Actually, it's not perfect. Every encyclopedia is not perfect, but this is a pretty good kind of first-go-to source of information." For students, it's probably not the only source that they should cite in their report, but it's a great place to start investigating a subject. But is your sense that teachers still, 15 years later, have a kind of unwarranted suspicion of Wikipedia? And where does that come from?

Joel Breakstone:           Without a doubt, unfortunately. One of the tasks that we've now given to thousands of students asks them which of two sites would be a better place to start their research about gun control. One of them is a duke.edu website, and another one is a Wikipedia page. The Wikipedia page has dozens and dozens of references that are linked.

                                    In contrast, the duke.edu site actually is somebody's personal page, where they have posted a tract from the NRA about myths regarding gun control. So, Wikipedia in this case is a far better starting place in terms of the types of information that you can find. However, overwhelmingly, students select the duke.edu site, because they say, "My teacher told me that I shouldn't trust Wikipedia," or "I learned that .edu sites are always better."

                                    And so, the myths and misconceptions that you described, this idea that anybody can edit Wikipedia pages and that they are totally untrustworthy, unfortunately persist. And partially that's due to this kind of online guidance, but I think it's also just this deeply ingrained set of beliefs that have persisted for... You're talking about 2008. It certainly has persisted to the present moment.

                                    And part of it is that people don't have a full understanding of the way that Wikipedia works. Certainly it is not perfect; however, it can often be a really great jumping-off place for research if you're using those references well and clicking through to find other information. Similarly, in many cases, the most prominent topics are on pages that are protected.

                                    There are a whole series of levels of protection. Students are unfamiliar with that as well. So, what we need to do is help students to understand the ways of using Wikipedia effectively. And then also, hopefully begin to help teachers to understand the ways that Wikipedia can be used wisely so that these myths about it being a totally unreliable source can hopefully come to an end sooner rather than later.

Justin Reich:                 One of the things I find compelling about your research is that, to some extent, we can sit here and describe what might be considered sort of theoretical reasons why Wikipedia is a good source. It has these protections. It has these references. There's a good editorial system in place.

                                    But the other thing that we now have from your research is when you watch the screen recordings of professional fact-checkers, who solve your problems in the shortest amount of time with 100% or nearly 100% accuracy, they use Wikipedia regularly.

                                    That we could just say, "Well, let's look at the practices of the people who always get these challenges right." And look, they use Wikipedia, and they get to the outcomes that we want them to get to. Is that a fair summary?

Joel Breakstone:           Very well said. Yeah, that's exactly right. That we see that experts in search are using Wikipedia, that it is a powerful tool for quickly coming to better conclusions about online sources. And if we tell our students that they shouldn't use it, we're really depriving them of one of the basic tools.

                                    It would be kind of like telling a carpenter that they shouldn't use a hammer. Wikipedia is an indispensable tool. Again, it's not perfect, but it certainly can be incredibly useful if we understand how to use it well.

Justin Reich:                 Much like a hammer is not perfect, but it's useful in all kinds of circumstances.

Joel Breakstone:           Right.

Justin Reich:                 So, you probably have mixed feelings today on publication day, a sense of pride that some really high-quality, really comprehensive research is being published. And then probably a sort of a sense of dismay that you have to chat with people like me and explain that the nation's young people, the nation's teachers, the nation's citizens are probably not well equipped right now to handle the search challenges that they face in their day-to-day lives, and to handle them efficiently and effectively.

                                    But what kinds of things give you hope? What kinds of things make you think, "You know what? We could be better at this in 2024 than we are in 2020, if we take these kinds of steps or move in this kind of direction."

Joel Breakstone:           I think there's several things that give me hope. The first is that all of our research shows that students want to effectively evaluate online sources. In most cases, they just lack the tools to do it well. So, it's not that we're getting responses from students saying, "I don't know, I don't care. I'm not interested." Instead, they are doing the things that they've been taught to do.

                                    They're looking at the URL, and thinking that somehow a .org website is better than a .com, even though anybody can buy a .org website. Or they are just consulting an about page, unaware that the people who made the website control everything on the about page. They're doing the things we've taught them. And so, that tells me that we can teach them to do something different.

                                    And the interventions that we've done in high school and college classrooms show us what just a short amount of time can do. In the largest study that we've done so far, with just under 500 students in Lincoln, Nebraska, we saw a real shift in the way that students engage with sources: they began to read laterally after doing these six one-hour lessons. And do they need practice? Sure.

                                    But it is really different to approach the web with the idea that you need to consult other sources to determine whether or not to trust something, rather than just accepting everything at face value. So, I am very confident that we can shift the way that students go about making sense of the information on their screens, but we have to do it.

                                    And that's on us as adults to figure out how we go about making these materials widely available, and importantly, supporting teachers to use them in their classrooms. Teachers have an enormous amount already on their plates. And so, we can't just layer some other responsibility onto everything else without some really well-planned and readily usable materials.

                                    We can't leave it up to teachers to have to figure this out. We need to provide classroom-ready tools. So, I believe that's possible. But it certainly is going to require a wholesale effort to make those curricular materials available, and also to provide teachers with the support to effectively implement them in their classrooms.

Justin Reich:                 So, Joel, if listeners want to learn more about these lessons that you've put together, what should they check out?

Joel Breakstone:           So, they should check out our website, Civic Online Reasoning. The URL is cor.stanford.edu. All of the tasks that we've used in the studies we've talked about today are available there, as well as a whole series of assessments, all freely available.

                                    You just need to create an account, which requires a functioning email address, and then you can check them all out. They are delivered as Google Docs and Google Forms, and you can make copies of them and use them with students or try them out yourself.

Justin Reich:                 Terrific. Well, Joel Breakstone from the Stanford History Education Group, it's been great having you on TeachLab. Thanks so much for joining us.

Joel Breakstone:           Thanks, Justin. It's been a pleasure.

Justin Reich:                 That was Joel Breakstone. His latest article with colleagues from the Stanford History Education Group and others is Students' Civic Online Reasoning: A National Portrait. It's a disturbing study about how much young people struggle to sort truth from fiction online. But I think there's a hopeful dimension of this: that we, as educators, can learn some more effective skills, and we can do this better and we can teach this better.

                                    I'm Justin Reich. Thanks for listening to TeachLab, especially on a stormy day here. We're in our home studios, so you might've heard some of the rolling thunderstorms in the background. You can check out our show notes for links to Joel and his colleagues' work on Civic Online Reasoning, and all the other resources that we discussed today.

                                    If you want to dig deeper into the topic of today's episode, you can join myself, Joel, and Sam Wineburg from Stanford University in Sorting Truth from Fiction: Civic Online Reasoning. It's a free course available on edX where you'll learn the skills and practices of information literacy that folks like fact-checkers use to sort fact from fiction online.

                                    And if you're interested in issues of equity in schools, you can join myself and Vanderbilt Professor, Rich Milner, in a free self-paced online course called Becoming a More Equitable Educator: Mindsets and Practices. Through ingrained practice, you'll cultivate a better understanding of yourself and your students, gain new resources to help all students thrive, and develop an action plan to work in your community to advance the lifelong work of equitable teaching.

                                    You can find both of these courses on edX, and links to both in our show notes. So, enroll today. If you're making a list for your summer reading, you can find my book, Failure to Disrupt: Why Technology Alone Can’t Transform Education, at booksellers everywhere. And check out related content at failuretodisrupt.com.

                                    That's failuretodisrupt.com. Thanks again for listening. Be sure to subscribe to TeachLab to get future episodes, and leave us a review to let us know what you think. This episode of TeachLab was produced by Aimee Corrigan and Garrett Beazley, recorded and sound mixed by Garrett Beazley. Stay safe. Until next time.