If you have seen any of my talks, you have probably heard me say that the infosec industry wouldn't exist without deception.
Although I've seen enough nature documentaries to know deception exists throughout the rest of the animal kingdom, humans have the cunning ability to deceive each other to gain resources, whether in war or crime.
Of course, that deception has kept pace with technology and moved into the world of cybercrime - and the use of artificial intelligence is no different.
At Black Hat and Def Con this year, I saw an interesting dichotomy in the realm of AI, specifically the application of data science and machine learning in defensive and offensive security.
Machine learning models are only as good as the data they are fed.
As any data scientist will tell you, the majority of the job is data preparation and cleansing. That same dependence on data makes these models susceptible to deception through data poisoning and model manipulation.
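To make that risk concrete, here is a minimal, hypothetical sketch of label-flipping data poisoning using scikit-learn on synthetic data. The dataset, the 30% flip rate, and the logistic regression model are all illustrative assumptions, not a description of a real attack toolchain; the point is simply that an attacker who can tamper with training data can quietly degrade the model.

```python
# Minimal sketch (assumptions only): label-flipping poisoning on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Generate a simple binary classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" the training set by flipping 30% of the labels, simulating an
# attacker who can tamper with the data pipeline before training.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Running a toy example like this typically shows the poisoned model losing accuracy relative to the clean one, which is exactly why data provenance and pipeline integrity matter for any organization training or fine-tuning models.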
The application of large language models (LLMs) through tools such as ChatGPT has been a fantastic breakthrough for data science, with the promise of increasing productivity across many different industries.
An LLM is a machine learning model that uses natural language processing techniques to learn from massive amounts of text.
Some companies have been deceptive about how this technology works, confusing the industry.
Although LLM technology can seemingly create content from a prompt out of thin air, there is more to it than meets the eye.
LLMs rely on data inputs like any other model, so they leverage existing works, whether articles, blog posts, art, or even code.
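As a rough illustration of how these models turn a prompt into text, here is a minimal sketch using the open-source Hugging Face transformers library and the small gpt2 checkpoint. This is a toy example chosen for illustration; production models behind tools like ChatGPT are far larger, but the underlying idea of predicting the next token from patterns learned in existing training text is the same.

```python
# Minimal sketch (illustrative assumptions): prompt-driven text generation
# with a small open model, showing that output is built from patterns the
# model learned in its training text, not conjured from nothing.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Phishing emails generated by AI are"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```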
Interestingly, we can be deceived by this technology by accident; however, the same technology can be, and is being, used offensively to manipulate data models and people, and in many respects the offense is ahead of the defense.
The increasingly widespread use of this technology poses a significant threat to organizations and individuals, especially as many non-tech-savvy people remain unaware of it and the models become ever more convincing.
Generative AI is already being used to create progressively realistic videos and images for propaganda, fraud, and social engineering at a horrifying rate, and most security awareness training programs and other defenses against these attacks are slow to catch up.
In creating AI tools to make us more productive and creative, we have also opened a Pandora's box, as these same tools can be used to deceive us.
Organizations also need to consider the potential liability of using some of these tools, given how new the technology is, the open questions about data provenance, and potential legislation governing their use.
By keeping humans at the center, we are better able to harness the power of AI while ensuring it receives the proper inputs and that its outputs are monitored.
Trained humans are still better than machines at identifying patterns and detecting human deception; the challenge is that they are overwhelmed with data, tooling, and threats.
The more we can leverage AI to enhance the analysts' capabilities to make their jobs easier, the better we will defend against a whole new generation of threats - or maybe this post was written by an AI to convince you that's the case ;-).
About the Author
He has been in the cybersecurity field for over 15 years, working with companies to improve their security posture through detection engineering, threat hunting, insider threat programs, and vulnerability research.