AI In Brief: OpenAI is rolling out updates to GPT-4 that let the model answer questions about submitted images. The company says it has taken precautions to protect user privacy and prevent inappropriate outputs, such as blocking the model from recognizing faces or locations and refraining from commenting on people's appearances. The new functionality, called GPT-4V, has limitations in extracting information from images and is unsuitable for tasks such as identifying illegal drugs. OpenAI also flagged GPT-4V's potential to generate false information and spread disinformation at scale. In addition, OpenAI is adding voice input support for ChatGPT Plus users.

Meanwhile, French AI startup Mistral has released an unmoderated, uncensored language model that outperforms some competitors.

Meta has expanded the input prompt length for its Llama 2 models, allowing them to process more data and handle more complex tasks. However, Meta's release of the models' weights has drawn backlash over safety concerns.

Amazon executive Dave Limp said that conversations with Alexa may be used to train Amazon's large language model.

The US Department of Energy's Oak Ridge National Laboratory has launched the Center for AI Security Research to study adversarial attacks on machine learning systems. The lab aims to brief federal agencies on existing software and capabilities for mitigating AI risks.

Finally, AWS has made Bedrock, its platform hosting foundation models for enterprise use, generally available. Companies using Bedrock's generative AI services include Adidas, BMW Group, LexisNexis Legal & Professional, and PGA Tour.