Cyber Signals Issue 9 | AI-powered deception: Emerging fraud threats and countermeasures | Microsoft Security Blog



Microsoft maintains a continuous effort to protect its platforms and customers from fraud and abuse. From blocking imposters on Microsoft Azure and adding anti-scam features to Microsoft Edge, to fighting tech support fraud with new features in Windows Quick Assist, this edition of Cyber Signals takes you inside the work underway and important milestones achieved that protect customers.

We are all defenders. 

Between April 2024 and April 2025, Microsoft:

  • Thwarted $4 billion in fraud attempts.
  • Rejected 49,000 fraudulent partnership enrollments.
  • Blocked about 1.6 million bot signup attempts per hour.

The evolution of AI-enhanced cyber scams

AI has started to lower the technical bar for fraud and cybercrime actors looking for their own productivity tools, making it easier and cheaper to generate believable content for cyberattacks at an increasingly rapid rate. AI software used in fraud attempts runs the gamut, from legitimate apps misused for malicious purposes to more fraud-oriented tools used by bad actors in the cybercrime underground.

AI tools can scan and scrape the web for company information, helping cyberattackers build detailed profiles of employees or other targets to create highly convincing social engineering lures. In some cases, bad actors are luring victims into increasingly complex fraud schemes using fake AI-enhanced product reviews and AI-generated storefronts, where scammers create entire websites and e-commerce brands, complete with fake business histories and customer testimonials. By using deepfakes, voice cloning, phishing emails, and authentic-looking fake websites, threat actors seek to appear legitimate at wider scale.

According to the Microsoft Anti-Fraud Team, AI-powered fraud attacks are happening globally, with much of the activity coming from China and Europe, specifically Germany, due in part to its status as one of the largest e-commerce and online services markets in the European Union (EU). The larger a region’s digital marketplace, the more attempted fraud it is likely to attract.

E-commerce fraud

Fraudulent e-commerce websites can be set up in minutes using AI and other tools requiring minimal technical knowledge. Previously, it would take threat actors days or weeks to stand up convincing websites. These fraudulent websites often mimic legitimate sites, making it challenging for consumers to identify them as fake. 

Using AI-generated product descriptions, images, and customer reviews, scammers dupe customers into believing they are interacting with a genuine merchant, exploiting consumer trust in familiar brands.

AI-powered customer service chatbots add another layer of deception by convincingly interacting with customers. These bots can delay chargebacks by stalling customers with scripted excuses and manipulating complaints with AI-generated responses that make scam sites appear professional.

In a multipronged approach, Microsoft has implemented robust defenses across our products and services to protect customers from AI-powered fraud. Microsoft Defender for Cloud provides comprehensive threat protection for Azure resources, including vulnerability assessments and threat detection for virtual machines, container images, and endpoints.

Microsoft Edge features website typo protection and domain impersonation protection using deep learning technology to help users avoid fraudulent websites. Edge has also implemented a machine learning-based Scareware Blocker to identify and block potential scam pages and deceptive pop-up screens with alarming warnings claiming a computer has been compromised. These attacks try to frighten users into calling fraudulent support numbers or downloading harmful software.

Job and employment fraud

The rapid advancement of generative AI has made it easier for scammers to create fake listings on various job platforms. They generate fake profiles with stolen credentials, fake job postings with auto-generated descriptions, and AI-powered email campaigns to phish job seekers. AI-powered interviews and automated emails enhance the credibility of job scams, making it harder for job seekers to identify fraudulent offers.

To counter this, job platforms should introduce multifactor authentication for employer accounts, making it harder for bad actors to take over legitimate hirers’ listings, and should use available fraud-detection technologies to catch suspicious content.

Fraudsters often ask for personal information, such as resumes or even bank account details, under the guise of verifying the applicant’s information. Unsolicited text and email messages offering employment opportunities that promise high pay for minimal qualifications are typically an indicator of fraud.

Employment offers that include requests for payment, offers that seem too good to be true, unsolicited offers or interview requests over text message, and a lack of formal communication platforms can all be indicators of fraud.
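
These indicators lend themselves to simple automated screening. The following is a minimal, illustrative rule-based scorer for job messages; the patterns, weights, and threshold are assumptions for demonstration only, not any platform’s actual detection logic:

```python
import re

# Illustrative red-flag rules drawn from the indicators above; the patterns
# and weights are assumptions for demonstration, not a production rule set.
RED_FLAGS = [
    (r"\b(registration|processing|training)\s+fee\b", 3),  # requests for payment
    (r"\b(no experience|minimal qualifications)\b", 2),    # too good to be true
    (r"\$\s?\d{3,}\s*(?:/|per)\s*(day|hour)", 2),          # implausibly high pay
    (r"\b(whatsapp|telegram|text me)\b", 2),               # informal channels
    (r"@gmail\.com\b", 1),                                 # nonbusiness email
]

def score_job_message(text: str) -> int:
    """Return a simple fraud-risk score for a job offer message."""
    lowered = text.lower()
    return sum(weight for pattern, weight in RED_FLAGS
               if re.search(pattern, lowered))

message = ("Earn $500/day, no experience needed! Pay a $50 registration fee "
           "and text me on WhatsApp to start immediately.")
print(f"risk score: {score_job_message(message)}")  # a high score might warrant review
```

A real platform would combine signals like these with account history and network-level telemetry rather than rely on keyword rules alone.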

Tech support scams

Tech support scams are a type of fraud where scammers trick victims into paying for unnecessary technical support services to fix device or software problems that don’t exist. The scammers may then gain remote access to a computer, which lets them access all information stored on it and on any connected network, or install malware that gives them ongoing access to the computer and sensitive data.

Tech support scams are a case where elevated fraud risks exist, even if AI does not play a role. For example, in mid-April 2024, Microsoft Threat Intelligence observed the financially motivated and ransomware-focused cybercriminal group Storm-1811 abusing Windows Quick Assist software by posing as IT support. Microsoft did not observe AI used in these attacks; Storm-1811 instead impersonated legitimate organizations through voice phishing (vishing) as a form of social engineering, convincing victims to grant them device access through Quick Assist. 

Quick Assist is a tool that enables users to share their Windows or macOS device with another person over a remote connection. Tech support scammers often pretend to be legitimate IT support from well-known companies and use social engineering tactics to gain the trust of their targets. They then attempt to employ tools like Quick Assist to connect to the target’s device. 

Quick Assist and Microsoft are not compromised in these cyberattack scenarios; however, the abuse of legitimate software presents risk Microsoft is focused on mitigating. Informed by Microsoft’s understanding of evolving cyberattack techniques, the company’s anti-fraud and product teams work closely together to improve transparency for users and enhance fraud detection techniques. 

The Storm-1811 cyberattacks highlight the capability of social engineering to circumvent security defenses. Social engineering involves collecting relevant information about targeted victims and arranging it into credible lures delivered through phone, email, text, or other mediums. Various AI tools can quickly find, organize, and generate information, thus acting as productivity tools for cyberattackers. Although AI is a new development, enduring measures to counter social engineering attacks remain highly effective. These include increasing employee awareness of legitimate helpdesk contact and support procedures, and applying Zero Trust principles to enforce least privilege across employee accounts and devices, thereby limiting the impact of any compromised assets while they are being addressed. 

Microsoft has taken action to mitigate attacks by Storm-1811 and other groups by suspending identified accounts and tenants associated with inauthentic behavior. If you receive an unsolicited tech support offer, it is likely a scam. Always reach out to trusted sources for tech support. If scammers claim to be from Microsoft, we encourage you to report it directly to us at https://www.microsoft.com/reportascam.

Building on the Secure Future Initiative (SFI), Microsoft is taking a proactive approach to ensuring our products and services are “Fraud-resistant by Design.” In January 2025, a new fraud prevention policy was introduced: Microsoft product teams must now perform fraud prevention assessments and implement fraud controls as part of their design process. 

Recommendations

  • Strengthen employer authentication: Fraudsters often hijack legitimate company profiles or create fake recruiters to deceive job seekers. To prevent this, job platforms should introduce multifactor authentication and Verified ID as part of Microsoft Entra ID for employer accounts, making it harder for unauthorized users to gain control.
  • Monitor for AI-based recruitment scams: Companies should deploy deepfake detection algorithms to identify AI-generated interviews where facial expressions and speech patterns may not align naturally.
  • Be cautious of websites and job listings that seem too good to be true: Verify the legitimacy of websites by checking for secure connections (https) and using tools like Microsoft Edge’s typo protection.
  • Avoid providing personal information or payment details to unverified sources: Look for red flags in job listings, such as requests for payment or communication through informal platforms like text messages, WhatsApp, nonbusiness Gmail accounts, or requests to contact someone on a personal device for more information.

Using Microsoft’s security signal to combat fraud

Microsoft is actively working to stop AI-assisted fraud attempts by evolving large-scale, AI-based detection models, such as machine learning systems, that play defense by learning from and mitigating fraud attempts. Machine learning helps a computer learn without explicit instruction, using algorithms to discover patterns in large datasets. Those patterns are then used to build a model that can make predictions with high accuracy.
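
As a minimal sketch of that pattern-learning step (using scikit-learn with toy, invented features and labels; real systems train on vastly larger datasets and far richer signals), a classifier can be fit to labeled examples and then used to score new activity:

```python
from sklearn.linear_model import LogisticRegression

# Toy training data: each row is [account_age_days, signups_per_hour_from_ip,
# mismatched_geo (0/1)]; labels mark known fraud (1) vs. legitimate (0).
# These features are invented for illustration.
X_train = [
    [900, 1, 0], [1200, 2, 0], [400, 1, 0],   # legitimate
    [2, 300, 1], [1, 450, 1], [5, 120, 1],    # fraudulent
]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Score a new signup: young account, high signup rate, geo mismatch.
new_signup = [[3, 250, 1]]
print(model.predict(new_signup))        # -> [1], flagged as likely fraud
print(model.predict_proba(new_signup))  # class probabilities
```

The model generalizes from labeled history to new, unseen attempts, which is what lets detection keep pace as fraud patterns shift.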

We have developed in-product safety controls that warn users about potential malicious activity and integrate rapid detection and prevention of new types of attacks.

Our fraud team has developed domain impersonation protection using deep-learning technology at the domain creation stage, to help protect against fraudulent e-commerce websites and fake job listings. Microsoft Edge has incorporated website typo protection, and we have developed AI-powered fake job detection systems for LinkedIn.
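
Microsoft’s production protections rely on deep learning, but the core idea of flagging look-alike domains at creation time can be sketched with a simple edit-distance comparison. The brand list and distance threshold below are illustrative assumptions, not the actual system:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Illustrative brand list; a real system would cover far more names.
KNOWN_BRANDS = ["microsoft", "linkedin", "office"]

def looks_like_typosquat(domain: str, max_distance: int = 2) -> bool:
    """Flag a second-level domain that is suspiciously close to,
    but not exactly, a well-known brand name."""
    label = domain.split(".")[0].lower()
    return any(0 < edit_distance(label, brand) <= max_distance
               for brand in KNOWN_BRANDS)

print(looks_like_typosquat("micros0ft.com"))  # True: one substitution away
print(looks_like_typosquat("microsoft.com"))  # False: exact brand match
print(looks_like_typosquat("contoso.com"))    # False: not close to any brand
```

A real detector must also handle homoglyphs, keyboard-adjacency typos, and registration metadata, which is where deep learning models earn their keep.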

Microsoft Defender SmartScreen is a cloud-based security feature that helps prevent unsafe browsing by analyzing websites, files, and applications based on their reputation and behavior. It is integrated into Windows and the Microsoft Edge browser to help protect users from phishing attacks, malicious websites, and potentially harmful downloads.

Furthermore, Microsoft’s Digital Crimes Unit (DCU) partners with others in the private and public sectors to disrupt the malicious infrastructure used by criminals perpetrating cyber-enabled fraud. The team’s longstanding collaboration with law enforcement around the world to respond to tech support fraud has resulted in hundreds of arrests and increasingly severe prison sentences worldwide. The DCU is applying key learnings from past actions to disrupt those who perpetrate these schemes.
