Is it Time for School Psychologists to Embrace AI for Research?

Artificial intelligence (AI) is rapidly transforming how professionals access, analyze, and apply information. For school psychologists, who frequently need to research new interventions, assessment tools, or relevant literature, AI can provide significant advantages. Even though formal research isn’t a central part of most practitioners’ roles, research-related tasks—like program evaluation and data-driven decision-making—are becoming increasingly common in the field. Leveraging AI tools for these purposes can save time and enhance the quality of information gathered.

One exciting development on the horizon is Google DeepMind's "AI Co-Scientist," designed to accelerate scientific discovery by serving as a collaborative research partner. Although not yet publicly available, its potential is noteworthy—it’s expected to assist in formulating hypotheses, designing experiments, and analyzing results. For school psychologists, this could translate to quicker evaluations of intervention effectiveness or more comprehensive reviews of large-scale assessment data. You can learn more about this forthcoming tool in Google's announcement.

In the meantime, several robust AI research tools are already available, providing immediate support for school psychologists seeking to deepen their understanding of various topics. Three platforms currently lead the AI-driven research space: OpenAI Deep Research, Google Deep Research, and Perplexity AI. Each has distinct features that can meet different needs. For a more detailed examination of these tools, I highly recommend Jordan Wilson’s podcast, EP 465: Deep Research Throwdown - Perplexity vs. Google vs. OpenAI.

Before discussing these tools, please note some important caveats:

  1. Hallucinations Are Inevitable – AI tools frequently generate false or misleading information. Never assume their outputs are entirely accurate without verification. Some tools are better than others (see the comparison below).

  2. Limited Access to Full-Text Research – Most academic papers are behind paywalls, so AI relies on abstracts, summaries, or publicly available sources, which can lead to missing details or misinterpretations.

  3. Unverified and Potentially Unreliable Sources – Much of the information AI retrieves comes from .com, .edu, or .org domains, which vary in reliability. Unlike peer-reviewed sources, these may contain bias, outdated data, or inaccuracies.

  4. Lack of Critical Evaluation – AI does not assess study quality, potential biases, or methodological rigor, leading to the potential use of flawed or retracted research.

  5. Fabricated or Incorrect Citations – AI frequently generates fake or incorrect references, including non-existent journal articles or incorrect author details. Always verify citations.

  6. Oversimplification and Loss of Nuance – AI may condense complex research findings in a way that removes essential context, impacting interpretation and application.

Below is a comparison highlighting key differences among these tools:

  • Accessible with a free account?

    • OpenAI: 2 uses/month (announced as coming soon)

    • Perplexity: 5 uses/day

    • Google: No access

  • Number of uses allowed per month on paid plans?

    • OpenAI: Pro plan: 100; Plus plan: 10

    • Perplexity: Unlimited

    • Google: Unlimited

  • Which model is used?

    • OpenAI: OpenAI o3 FT

    • Perplexity: Undisclosed

    • Google: Gemini 1.5 Pro

  • Are file uploads supported?

    • OpenAI: Yes

    • Perplexity: Yes

    • Google: No

  • How many websites are searched per research task (on average)?

    • OpenAI: 10–30

    • Perplexity: 20–200

    • Google: 50–500

  • How much time is needed to conduct research?

    • OpenAI: Up to 30 minutes

    • Perplexity: 1–3 minutes

    • Google: Up to 10 minutes

  • How long are reports (approximately)?

    • OpenAI: 1,000–2,000 words

    • Perplexity: 400–800 words

    • Google: 1,500–3,000 words

  • How likely are hallucinations?

    • OpenAI: Least likely

    • Perplexity: Very likely

    • Google: Moderately likely

Understanding Perplexity AI

Unlike OpenAI and Google’s offerings, Perplexity AI is not a large language model (LLM) itself. Instead, it functions as an “answer engine” that draws on various LLMs, including GPT-4, Claude, and Mistral, to compile and synthesize information. While this allows it to reference a broad range of sources, the quality and consistency can vary depending on the models it accesses. Although some school psychologists appreciate Perplexity’s quick responses and breadth of information, I personally haven’t used it since Google and ChatGPT integrated web-searching capabilities, which I find more reliable. Still, it's important to acknowledge that preferences vary, and some colleagues find Perplexity indispensable.

Recommendations for School Psychologists

Most school psychologists aren’t conducting large-scale research studies, but the need to quickly gather information on new assessments, interventions, or evidence-based practices is common. I encourage you to experiment with these tools using a topic you’re familiar with—whether that’s a specific behavioral intervention or a new cognitive assessment measure. Testing these platforms with a known subject can give you a sense of how accurate and helpful each one is in practice.

Try it yourself: Pick a topic you know well and run the same query through OpenAI Deep Research, Perplexity AI, and Google Deep Research. How does the information compare to what you already know? Was it accurate? Did the tool miss anything important? Reflecting on these questions will help you determine which platform best meets your needs.

For example:

  • Need a literature summary on a new reading intervention? Google Deep Research’s expansive source base might suit you best.

  • Trying to draft a program evaluation report? OpenAI Deep Research could help with its reasoning capabilities and longer output.

  • Looking for a quick answer to a practical question? Perplexity’s speed may be appealing—but verify its sources carefully.

Why This Matters for the Field

AI isn’t just a trend; it’s a toolset that can expand our capacity to make informed, data-driven decisions. As program evaluation and research-related tasks become more prominent in our profession, staying familiar with these tools will be essential. Adopting them now not only improves efficiency but also ensures that school psychologists remain at the forefront of evidence-based practice.
