AI @ AP
Leveraging AI to advance the power of facts
The Associated Press was one of the first news organizations to leverage artificial intelligence and automation to bolster its core news report. Today, we use machine learning at key points in our value chain, including gathering, producing and distributing the news. Explore this page to learn more about the history of artificial intelligence at The Associated Press, our strategy around the technology and how we use it today.
Event detection
We deploy a tool from SAM, a Canadian social media solutions company, that uses natural language processing (NLP) to detect newsworthy events in text-based chatter on Twitter and other social media platforms. SAM alerts surface more breaking news events, and surface them sooner, than journalists could through manual monitoring of social media.
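The source doesn't describe how SAM's detection works internally. As a rough illustration of the general idea behind NLP-based event detection, the hypothetical sketch below flags terms whose frequency in a recent window of posts spikes well above their baseline rate; all function names and thresholds are illustrative assumptions, not SAM's actual method.

```python
from collections import Counter

def detect_spikes(baseline_posts, recent_posts, min_count=5, ratio=3.0):
    """Flag terms whose frequency in the recent window jumps well above
    their baseline rate -- a crude proxy for a breaking-news signal."""
    def term_counts(posts):
        counts = Counter()
        for post in posts:
            counts.update(post.lower().split())
        return counts

    base = term_counts(baseline_posts)
    recent = term_counts(recent_posts)
    flagged = []
    for term, count in recent.items():
        # Require both a minimum volume and a sharp jump over baseline.
        if count >= min_count and count > ratio * base.get(term, 1):
            flagged.append(term)
    return flagged
```

Production systems would add entity recognition, deduplication and credibility scoring on top of a raw volume signal like this.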
Automated story generation
Since 2014, we have automated text stories from structured data sets using natural language generation (NLG). We began with corporate earnings stories for all publicly traded companies in the United States, increasing our output tenfold and improving the liquidity of the companies we covered. We have since applied similar technology to previews and game recaps for more than a dozen sports globally.
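To make the technique concrete: template-driven NLG renders structured figures into readable sentences. The toy function below is a minimal sketch in that spirit, not AP's actual system, which selects among many templates and handles far more fields.

```python
def earnings_story(company, eps, consensus, revenue_millions):
    """Render a one-sentence earnings lede from structured data,
    in the spirit of template-driven natural language generation."""
    # Choose phrasing based on how results compare to expectations.
    verdict = "beat" if eps > consensus else "fell short of"
    return (f"{company} reported earnings of ${eps:.2f} per share, "
            f"which {verdict} Wall Street expectations of ${consensus:.2f}, "
            f"on revenue of ${revenue_millions:.1f} million.")
```

Given a data feed of quarterly results, one such template can produce a short story for every listed company, which is what makes tenfold output increases possible.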
Machine learning software developed by Trint enables us to transcribe video in real time, slashing the time previously spent creating transcripts for broadcast video. We are now working to apply this technology to live video streams and to integrate automatic translation into multiple languages.
AP story summaries
We are exploring how summarization technology can automatically generate different versions of text stories to serve a variety of digital uses. Our current project creates short summaries of longer articles and delivers them to editors for review, streamlining a process that was previously entirely manual.
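The page doesn't say which summarization approach the project uses. One common family is extractive summarization, sketched minimally below under that assumption: score each sentence by the frequency of its words across the article, then keep the top sentences in their original order for an editor to review.

```python
import re
from collections import Counter

def summarize(text, max_sentences=2):
    """Extractive summary: rank sentences by aggregate word frequency,
    then return the top ones in original document order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Indices of sentences, best-scoring first.
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w] for w in re.findall(r'\w+', sentences[i].lower())),
    )
    keep = sorted(ranked[:max_sentences])  # restore reading order
    return ' '.join(sentences[i] for i in keep)
```

Keeping a human editor in the loop, as the project does, matters because frequency-based scoring can miss nuance that an abstractive or human summary would catch.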
Image recognition
Image recognition software can improve the keywords on AP photos, including the millions of photos in our archive, and strengthen our system for finding and recommending images to editors. We have tested whether these tools can help keep graphic content out of our image feeds or identify athletes by their jersey numbers. This work will create the first editorially defined taxonomy for the news industry.
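A key step implied here is translating raw vision-model labels into terms from a controlled editorial taxonomy. The sketch below is purely illustrative; the mapping, label names and confidence threshold are invented for the example and are not AP's taxonomy.

```python
# Hypothetical mapping from raw vision-model labels to terms in an
# editorially defined taxonomy; entries here are illustrative only.
TAXONOMY = {
    "soccer ball": "Soccer",
    "basketball": "Basketball",
    "podium": "Politics",
}

def editorial_keywords(model_labels, min_score=0.6):
    """Keep only confident model labels and translate them into
    controlled taxonomy terms, deduplicated, for photo keywording."""
    keywords = []
    for label, score in model_labels:
        term = TAXONOMY.get(label)
        if term and score >= min_score and term not in keywords:
            keywords.append(term)
    return keywords
```

Routing model output through a curated mapping like this is what keeps automated keywords consistent with editorial vocabulary rather than raw classifier jargon.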
Computer vision for video
We are applying computer vision technology from Vidrovr to identify major political and celebrity figures in video and to accurately time-stamp soundbites. This streamlines the previously manual process of reviewing our video news feeds to create the text “shotlists” our customers use as a guide to the content of our news video.