this post was submitted on 19 Feb 2024

Technology
Summary

This research, conducted by Microsoft and OpenAI, focuses on how nation-state actors and cybercriminals are using large language models (LLMs) in their attacks.

Key findings:

  • Threat actors are exploring LLMs for a range of tasks: gathering intelligence, developing tools, drafting phishing emails, evading detection, and social engineering.
  • No major LLM-enabled attacks were observed, but early-stage experimentation suggests potential future threats.
  • Several nation-state actors were identified using LLMs, including groups linked to Russia, North Korea, Iran, and China.
  • Microsoft and OpenAI are taking action by disabling accounts associated with malicious activity and improving LLM safeguards.

Specific examples:

  • Russia (Forest Blizzard): Used LLMs to research satellite and radar technologies, and for basic scripting tasks.
  • North Korea (Emerald Sleet): Used LLMs for research on experts and think tanks related to North Korea, phishing email content, and understanding vulnerabilities.
  • Iran (Crimson Sandstorm): Used LLMs for social engineering emails, code snippets, and techniques for evading detection.
  • China (Charcoal Typhoon): Used LLMs for tool development, scripting, social engineering, and understanding cybersecurity tools.
  • China (Salmon Typhoon): Used LLMs for exploratory information gathering on various topics, including intelligence agencies, individuals, and cybersecurity matters.

Additional points:

  • The research identified eight LLM-themed TTPs (Tactics, Techniques, and Procedures) for the MITRE ATT&CK® framework to track malicious LLM use.
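To make the taxonomy idea concrete, here is a minimal sketch of how such LLM-themed TTPs could be recorded alongside MITRE ATT&CK-style metadata and queried. The class name, catalog entries, and actor-to-TTP mappings below are illustrative, drawn only from the examples in this summary, and are not the report's actual eight entries.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LlmTtp:
    """One LLM-themed tactic/technique/procedure (illustrative schema)."""
    name: str            # e.g. "LLM-informed reconnaissance" (hypothetical label)
    description: str     # what the adversary uses the model for
    observed_by: tuple   # threat-actor groups seen using it (per this summary)

# Two placeholder entries built from the examples above, not the full list.
CATALOG = [
    LlmTtp(
        name="LLM-informed reconnaissance",
        description="Researching targets, technologies, or subject-matter experts",
        observed_by=("Forest Blizzard", "Emerald Sleet"),
    ),
    LlmTtp(
        name="LLM-supported social engineering",
        description="Drafting phishing or persuasion content",
        observed_by=("Emerald Sleet", "Crimson Sandstorm"),
    ),
]

def actors_using(ttp_name: str) -> tuple:
    """Return the groups observed using a given TTP, or an empty tuple."""
    for ttp in CATALOG:
        if ttp.name == ttp_name:
            return ttp.observed_by
    return ()
```

Structuring observations this way is what lets defenders share and match LLM-abuse sightings the same way conventional ATT&CK techniques are tracked.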
AbouBenAdhem@lemmy.world 10 points 8 months ago (last edited 8 months ago)

I assume they mean threat actors besides Microsoft and OpenAI?