The Role of Maps in Research: Why ChatGPT Can’t Replace Librarians

At UC Berkeley’s Institute of Governmental Studies Library (IGSL), there’s a rich collection of government documents—and among them, you can find fascinating maps of all sorts.

These maps come in all shapes and sizes, from historical land-use charts to zoning and infrastructure maps. Some are standalone, while others are embedded within larger government publications. But one thing is clear: maps require expertise to fully interpret—something that ChatGPT alone cannot replace.

When researching documents from the IGSL collection, I often use ChatGPT to analyze and extract insights. It’s great at breaking down complex texts, summarizing policies, or comparing jurisdictions. But when I tried to apply the same approach to maps, I realized something important: AI alone isn’t enough.

Why Librarian Expertise Matters

If I upload a document to ChatGPT, I can instruct it to analyze sections, extract key themes, or compare policies across different time periods. The same should apply to maps, right? Not quite. Unlike text, maps contain spatial data, historical context, and implicit details that aren’t always obvious—even to an AI.

Here’s where librarian expertise comes in. A librarian or a skilled researcher can tell you what to look for before you even begin. They can point out:

  • Key terms and geographic indicators that might not be obvious.

  • Changes in mapping conventions over time that affect interpretation.

  • Historical significance that AI might not pick up without deeper context.

ChatGPT can analyze the content of a map, but it doesn’t always understand why certain markings, legends, or boundary shifts are significant without proper context. That’s where human expertise is invaluable.

Pairing ChatGPT with Human Knowledge

Rather than relying on ChatGPT’s internal logic alone, the best approach is to feed it expert-driven instructions. Instead of simply asking, “Analyze this map,” you might first consult a librarian or researcher and refine your prompts based on their recommendations.

For example, if I’m looking at a historical zoning map from the IGSL collection, I might structure my ChatGPT request like this:

  1. Ask a librarian – What should I pay attention to when comparing zoning maps from different decades?

  2. Frame a more detailed prompt – “Compare zoning changes between [X year] and [Y year]. Identify patterns in industrial growth, residential expansion, and infrastructure development.”

  3. Verify results – Cross-check ChatGPT’s findings against expert insights to ensure accuracy.

This approach bridges AI’s efficiency with human expertise, making research both faster and more reliable.
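The workflow above can be sketched in code. This is a minimal, hypothetical example of step 2: turning a librarian’s recommendations into a structured prompt before sending it to ChatGPT. The function name, the zoning-map example, and the sample focus points are all illustrative assumptions, not part of any real API.

```python
def build_map_prompt(map_description, year_a, year_b, expert_focus_points):
    """Compose a detailed analysis prompt from expert-recommended focus areas.

    expert_focus_points: things a librarian suggested paying attention to
    (hypothetical examples below).
    """
    focus = "\n".join(f"- {point}" for point in expert_focus_points)
    return (
        f"Compare zoning changes in {map_description} "
        f"between {year_a} and {year_b}.\n"
        f"Pay particular attention to:\n{focus}\n"
        "Flag any legend symbols or boundary shifts you cannot interpret "
        "rather than guessing."
    )

# Focus points a librarian might suggest (illustrative only):
librarian_notes = [
    "patterns in industrial growth along transit corridors",
    "residential expansion at the city edge",
    "infrastructure development (roads, utilities)",
]

prompt = build_map_prompt(
    "a historical zoning map from the IGSL collection",
    "1950", "1970", librarian_notes,
)
print(prompt)
```

The resulting text can then be pasted into ChatGPT alongside the map, and the output cross-checked against the librarian’s insights (step 3).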

Final Thoughts

Maps are an essential part of government research, and while ChatGPT is a powerful tool, it works best when combined with librarian or human-in-the-loop knowledge. AI can help us process and extract insights, but real understanding comes from the people who have the expertise or have spent years studying these materials.

If you’re using ChatGPT to analyze government documents, especially historical maps, take the time, if possible, to engage your local librarians or another expert. If you don’t have access to those, this is where knowledge profiles can be helpful—data models that mimic cross-industry experts. In the near future, I’ll share how-to tips on creating these knowledge profiles.

My name is Nick, and I enjoy teaching and speaking about the intersection of research, ChatGPT, and prompt engineering. My work focuses on developing easy-to-use frameworks and strategies that ensure AI doesn’t just generate answers, but also verifies and checks itself—helping researchers use ChatGPT more effectively and responsibly. If you have questions, need help setting up, or want to improve your prompts, feel free to reach out—I’d love to help!