Generative AI has lowered the barrier to entry for data analysis. More journalists can now work with datasets that previously required specialized skills.
But according to AP data editor Angeliki Kastanis, the fundamentals have not changed.
“The principles of data journalism are not new,” she said. “We look for transparency, reproducibility and accuracy. Those are things generative AI does not do well.”
In AP’s approach, every data collaboration now includes a basic set of questions:
- Was AI used?
- Why was it chosen?
- How was it applied?
- How were the results verified?
These questions are becoming essential as AI tools introduce new risks. Outputs can vary, decision-making is often opaque, and errors can be difficult to detect without technical understanding.
That does not mean AI has no role. It can accelerate workflows, particularly in areas like debugging code or extracting structured data from large document sets.
At Stanford’s Big Local News, AI helped transform hundreds of thousands of pages of police records into a searchable database. The key was transparency. Users could trace every data point back to its source.
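The traceability described here can be sketched in a few lines: each extracted data point keeps a pointer to its source document and page, so any value in the database can be checked against the original record. The field names and helper below are illustrative assumptions, not Big Local News's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class ExtractedRecord:
    """One data point plus the provenance needed to verify it."""
    field: str          # e.g. "incident_date"
    value: str          # the extracted value
    source_file: str    # original document the value came from
    page: int           # page number within that document

def provenance(record: ExtractedRecord) -> str:
    """Human-readable citation a reader can follow back to the source."""
    return f"{record.field}={record.value!r} ({record.source_file}, p. {record.page})"

# Every row in the searchable database carries its citation:
rec = ExtractedRecord("incident_date", "2021-03-14", "dept_records_0042.pdf", 7)
print(provenance(rec))  # → incident_date='2021-03-14' (dept_records_0042.pdf, p. 7)
```

Keeping provenance alongside the value, rather than in a separate log, is what makes "trace every data point back to its source" practical at the scale of hundreds of thousands of pages.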
Still, AI often creates new work rather than removing it. As Kastanis noted, AI-generated code can be more complex than what a journalist would write by hand, and it requires its own round of verification.
Actionable takeaways for newsrooms
- Establish clear internal guidelines for AI use in data workflows.
- Require verification steps for all AI-assisted outputs.
- Ensure journalists understand enough to evaluate, not just use, AI-generated results.
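One lightweight verification step, offered here as an illustrative sketch rather than any newsroom's actual process, is to confirm that every AI-extracted value appears verbatim in the source text before it enters the dataset:

```python
def verify_extraction(source_text: str, extracted_values: list[str]) -> list[str]:
    """Return extracted values that cannot be found verbatim in the source.

    An empty result means every value is grounded in the document;
    anything returned needs human review before publication.
    """
    return [v for v in extracted_values if v not in source_text]

page = "Officer responded at 14:32 on 2021-03-14 to 200 Main St."
flagged = verify_extraction(page, ["14:32", "2021-03-14", "300 Main St."])
print(flagged)  # → ['300 Main St.']
```

A check this simple will not catch every hallucination (paraphrased or reformatted values slip through), but it cheaply surfaces outputs that have no basis in the document at all.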
Questions for leaders
- Do we have clear standards for how AI is used and verified in our newsroom?
- Are we investing in skills that allow journalists to question outputs, not just produce them?