Mastercard Builds AI Model for Fraud Detection

Company says transaction-trained system could improve accuracy and reduce false positives.

NEW YORK — Mastercard is developing a new artificial intelligence model trained on transaction data to improve fraud detection and support decision-making across its payments network, the company announced Tuesday.

The model, described in a company post by Distinguished Engineer Steve Flinter, is designed as an “insights engine” rather than a chatbot, using structured payments data to predict transaction behavior and improve tools across cybersecurity, loyalty and analytics.

“We plan to build hybrid cybersecurity systems that combine the best of both our current AI models and this new LTM,” Flinter wrote, adding that the approach is intended to “futureproof our cyber defenses.”

Unlike large language models used in systems such as OpenAI’s ChatGPT, Mastercard’s system is a “large tabular model” trained on structured datasets, including transaction, fraud and chargeback data. The company said it removes personal information before training and is working with Nvidia and Databricks to scale the effort.

Early testing shows the model can reduce false positives in fraud detection—such as flagging legitimate high-value purchases—by identifying patterns that traditional models often miss, according to the company.
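To make the idea of a tabular model concrete, here is a purely illustrative sketch (not Mastercard’s actual system, and with entirely hypothetical feature names): such a model consumes structured rows of transaction features rather than free text, and added context is what lets it avoid flagging a legitimate high-value purchase.

```python
# Illustrative sketch only -- not Mastercard's system. Feature names
# (cardholder_avg_amount, merchant_first_seen, etc.) are hypothetical.
# A tabular fraud model takes structured rows like these as input; the
# toy heuristic below stands in for the learned scoring function.

def fraud_score(txn: dict) -> float:
    """Score a transaction in [0, 1]; higher means more suspicious.
    A high amount alone is not decisive -- merchant and cardholder
    context pulls legitimate big purchases back below the threshold."""
    score = 0.0
    if txn["amount"] > 5 * txn["cardholder_avg_amount"]:
        score += 0.5  # spend far above the cardholder's usual pattern
    if txn["merchant_first_seen"]:
        score += 0.3  # merchant never seen on this card before
    if txn["chargebacks_at_merchant"] > 0:
        score += 0.4  # merchant has prior chargeback history
    return min(score, 1.0)

# A large purchase at a familiar merchant, in line with past spending,
# scores 0.0; the same amount in an anomalous context scores 1.0.
legit = {"amount": 2000, "cardholder_avg_amount": 1800,
         "merchant_first_seen": False, "chargebacks_at_merchant": 0}
risky = {"amount": 2000, "cardholder_avg_amount": 50,
         "merchant_first_seen": True, "chargebacks_at_merchant": 2}
print(fraud_score(legit), fraud_score(risky))  # -> 0.0 1.0
```

The point of the sketch is the input shape: structured, contextual features rather than text, which is what distinguishes a large tabular model from a language model.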

The project highlights how financial firms are moving beyond consumer-facing AI assistants and applying advanced models to core operational systems. In cybersecurity terms, that means fraud detection, identity checks and transaction monitoring are becoming more tightly linked. It also raises familiar questions about governance, explainability and how centralized AI systems might perform under adversarial pressure as they take on bigger roles in financial decision-making.
