As troubling as deepfakes and large language model (LLM)-powered phishing are to the state of cybersecurity today, the truth is that the buzz around these risks may be overshadowing some of the bigger ...
Prompt injection and supply chain vulnerabilities remain the main LLM vulnerabilities but as the technology evolves new risks come to light including system prompt leakage and misinformation.
As businesses move from trying out generative AI in limited prototypes to putting them into production, they are becoming increasingly price conscious. Using large language models (LLMs) isn’t cheap, ...
When talking with a chatbot, you might inevitably give up your personal information—your name, for instance, and maybe details about where you live and work, or your interests. The more you share with ...
New tools for filtering malicious prompts, detecting ungrounded outputs, and evaluating the safety of models will make generative AI safer to use. Both extremely promising and extremely risky, ...
As AI takes hold in the enterprise, Microsoft is educating developers with guidance for more complex use cases in order to get the best out of advanced, generative machine language models like those ...
Some results have been hidden because they may be inaccessible to you
Show inaccessible results