AI Newsletter #100 - Adversarial prompting exposes API-integrated LLM vulnerabilities
Welcome to Nural's newsletter focusing on how AI is being used to tackle global grand challenges.
Packed inside we have:
- Meta open sources LLaMA - 10x smaller than GPT-3 with improved performance
- Adversarial prompting exposes API-integrated LLM vulnerabilities
- and ChatGPT allowed to be used in International Baccalaureate students' essays
If you would like to support our continued work from £1 then click here!
Marcel Hedman
Key Recent Developments
ChatGPT allowed in International Baccalaureate essays
What: The International Baccalaureate (IB) has allowed schoolchildren to quote from content created by ChatGPT in their essays.
Matt Glanville, the IB's head of assessment principles and practice, said: "The clear line between using ChatGPT and providing original work is exactly the same as using ideas taken from other people or the internet. As with any quote or material adapted from another source, it must be credited in the body of the text and appropriately referenced in the bibliography."
He added that essay writing would feature less prominently in the qualification process in the future because of the rise of chatbot technology.
Key Takeaway: Since the release of ChatGPT, there has been significant concern about the tool's ability to facilitate cheating by students, given its power to create fully formed essays. While some see the International Baccalaureate's decision to encourage the tool's use as a progressive step in adapting to technological advancements, others worry about the lack of standardization in the assessment process and the potential long-term impact on the evaluation of students.
ChatGPT as a marketing tool - the non obvious
What: OpenCage, an API for geocoding, recently saw an influx of users who quickly became unhappy and complained. The source? ChatGPT, which had recommended the service to users, claiming that it "offer[s] an API to turn a mobile phone number into the location of the phone".
They do not!
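What a geocoding API like OpenCage actually does is convert place names or addresses into coordinates (and back) — nothing to do with locating phones. As a minimal sketch of the real service, here is how a forward-geocoding request URL is typically constructed; the endpoint shape follows OpenCage's public API, but the key is a placeholder and this is an illustration, not official client code:

```python
from urllib.parse import urlencode

def build_geocode_url(query: str, api_key: str) -> str:
    """Build a forward-geocoding request URL: a place name or
    address in, latitude/longitude out. Endpoint shape based on
    OpenCage's public docs; the API key here is a placeholder."""
    base = "https://api.opencagedata.com/geocode/v1/json"
    return f"{base}?{urlencode({'q': query, 'key': api_key})}"

# Turns "Berlin, Germany" into a request for its coordinates --
# note there is no parameter for a phone number anywhere.
url = build_geocode_url("Berlin, Germany", "YOUR_API_KEY")
```

The input is free-text geography, not a phone number — which is exactly why ChatGPT's confident claim sent a wave of users to a product that could never do what they were promised.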
Key takeaway: This is a case where the training data held incorrect information, which the model then broadcast as truth at scale. Beyond the obvious questions about how to mitigate factual errors (and hallucinations) in LLMs with millions of users, another interesting concept is ChatGPT as a marketing engine based on what's contained in its training data. Will we see a world where companies pay for certain "facts" to be included and overweighted in a training corpus?
EU's AI Act faces delay with lawmakers deadlocked after crunch meeting
AI Ethics
🚀 Meta "open sources" LLaMA LLM
🚀 Novel adversarial prompting techniques
- Additional prompt engineering guide
🚀 Responsible practices with synthetic media framework
🚀 Elon Musk and Tesla face a fresh lawsuit alleging his self-driving tech is a fraud
🚀 OpenAI founder blog: "Planning for AGI"
- Here's a critique
Other interesting reads
🚀 BioGPT: Generative Pre-trained Transformer for Biomedical Text Generation and Mining [Paper]
🚀 Snapchat is releasing its own AI chatbot powered by ChatGPT
🚀 OpenAI launches an API for ChatGPT, plus dedicated capacity for enterprise customers
🚀 "Aligning Text-to-Image Models using Human Feedback"
Cool companies found this week
Data privacy
Metomic - Protect sensitive data in SaaS applications through detection of sensitive info, remediation of critical issues and real-time employee coaching. Recently raised $20m.
Talent
HireLogic - AI-based candidate insights by listening to any job interview
Developers
Nebuly - Plug & play open-source AI modules
Best,
Marcel Hedman
Nural Research Founder
www.nural.cc
If this has been interesting, share it with a friend who will find it equally valuable. If you are not already a subscriber, then subscribe here.
If you are enjoying this content and would like to support the work financially then you can amend your plan here from £1/month!