Mitigating the Cybersecurity Risks of Advanced AI and ML Capabilities
The continuous advancement of Artificial Intelligence (AI) and Machine Learning (ML) capabilities does not benefit defenders alone; it is also being leveraged by sophisticated cybercriminals for malicious purposes. Threat actors can use these capabilities to conduct precisely targeted social engineering, impersonate victims with deepfakes built from the abundant personal information exposed on social media, coordinate autonomous cyberattacks through botnets, and create malware, ransomware, and Advanced Persistent Threats (APTs) that are difficult to detect with legacy information security technology. In addition, insider risk arises when employees inadvertently disclose design source code, trade secrets, and confidential customer information through the use of Large Language Models (LLMs) such as ChatGPT or DeepSeek.
Fortunately, effective mitigations are available. This presentation addresses cybersecurity threats arising from the misuse of AI/ML capabilities and suggests mitigations through a well-thought-out System Security Plan (SSP) with adequate security controls.
Date & Time July 1, 2025 (US Pacific Time) 1:15 PM
Location Samsung Semiconductor (3655 N 1st St, San Jose, CA 95134 United States)