Warning: ini_set(): Session ini settings cannot be changed when a session is active in /home/u319666691/domains/cybersecurityboard.com/public_html/index.php on line 12

Warning: Trying to access array offset on value of type null in /home/u319666691/domains/cybersecurityboard.com/public_html/index.php on line 1071

Warning: Trying to access array offset on value of type null in /home/u319666691/domains/cybersecurityboard.com/public_html/index.php on line 1072

Warning: Undefined array key 1 in /home/u319666691/domains/cybersecurityboard.com/public_html/index.php on line 1105

Warning: Undefined array key 2 in /home/u319666691/domains/cybersecurityboard.com/public_html/index.php on line 1105

Warning: Undefined array key 3 in /home/u319666691/domains/cybersecurityboard.com/public_html/index.php on line 1105

Warning: Undefined array key 4 in /home/u319666691/domains/cybersecurityboard.com/public_html/index.php on line 1105

Warning: Undefined array key 5 in /home/u319666691/domains/cybersecurityboard.com/public_html/index.php on line 1105

Warning: Undefined variable $link_subfolder1 in /home/u319666691/domains/cybersecurityboard.com/public_html/index.php on line 1134

Warning: Undefined variable $meta_article in /home/u319666691/domains/cybersecurityboard.com/public_html/_template.php on line 100

Warning: Undefined variable $meta_og in /home/u319666691/domains/cybersecurityboard.com/public_html/_template.php on line 100

Warning: Undefined variable $meta_twitter in /home/u319666691/domains/cybersecurityboard.com/public_html/_template.php on line 100

Warning: Undefined variable $login_loggedon_html in /home/u319666691/domains/cybersecurityboard.com/public_html/_template.php on line 142
Meta's Purple Llama wants to test safety risks in AI models | CyberSecurityBoard

Warning: Undefined variable $comments_html in /home/u319666691/domains/cybersecurityboard.com/public_html/_template.php on line 527

Meta's Purple Llama wants to test safety risks in AI models

Generative Artificial Intelligence models have been around for years and their main function, compared to older AI models is that they can process more types of input.
Take for example the older models that were used to determine whether a file was malware or not.
Generative AI models are capable of sorting through more types of information.
Take for example Large Language Models that can process text, images, videos, songs, diagrams, webinars, computer code, and other similar types of input.
Generative AI models are the closest to human creativity we can get at this point in time.
Generative AI has brought about a new wave of innovations.
It enables us to have a conversation with models like ChatGPT, create images based on instructions, and summarize large amounts of text(s).
For this reason, Meta is collaborating in project Purple Llama with other AI application developers like Microsoft and cloud platforms like AWS and Google Cloud, and chip designers like Intel, AMD, and Nvidia.
LLMs can generate code, that fails to follow security best practices or introduce exploitable vulnerabilities.
Given that GitHub recently proudly touted that 46% of code is produced with the help of their CoPilot AI, this is far from just a theoretical risk.
It makes sense that the first step in project Purple Llama focuses on tools to test cyber security issues in software-generating models.
This package allows developers to run benchmark tests to check how likely it is for an AI model to generate insecure code or assist users in carrying out cyberattacks.
The package is introduced under the name CyberSecEval, a comprehensive benchmark developed to help bolster the cybersecurity of LLMs employed as coding assistants.
Initial tests showed that on average, LLMs suggested vulnerable code 30% of the time.
To check and filter all the inputs and outputs of a LLM, Meta released Llama Guard.
Llama Guard is a freely available model that provides developers with a pretrained model to help defend against generating potentially risky outputs.
The model has been trained on a mix of publicly available datasets to help find common types of potentially risky, or violating content.
This will allow developers to filter out specific items that might cause a model to produce inappropriate content.
Memory safety vulnerabilities are a class of well-known and common coding errors that cybercriminals exploit quite often.
Coding errors that could increase in numbers unless we take steps toward using memory safe programming languages and use the methods to check the code generated by LLMs. We don't just report on threats-we remove them.


This Cyber News was published on www.malwarebytes.com. Publication date: Fri, 08 Dec 2023 18:43:04 +0000


Cyber News related to Meta's Purple Llama wants to test safety risks in AI models


Fatal error: Uncaught mysqli_sql_exception: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 's Purple Llama wants to test safety risks in AI models') AS score FROM TPL_ar...' at line 1 in /home/u319666691/domains/cybersecurityboard.com/public_html/_template.php:336 Stack trace: #0 /home/u319666691/domains/cybersecurityboard.com/public_html/_template.php(336): mysqli_query() #1 /home/u319666691/domains/cybersecurityboard.com/public_html/_template.php(548): template_block() #2 /home/u319666691/domains/cybersecurityboard.com/public_html/_template.php(531): template_related() #3 /home/u319666691/domains/cybersecurityboard.com/public_html/index.php(1135): template_content() #4 {main} thrown in /home/u319666691/domains/cybersecurityboard.com/public_html/_template.php on line 336