Welcome to Nural's newsletter focusing on how AI is being used to tackle global grand challenges.
Packed inside we have:
- Meta launches Sphere, an AI knowledge tool based on open web content, used initially to verify citations on Wikipedia
- From AI compliance to competitive advantage: The rewards of Responsible AI
- and capAI - A Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act
If you would like to support our continued work from £1 then click here!
Marcel Hedman
Key Recent Developments
Meta launches Sphere, an AI knowledge tool based on open web content, used initially to verify citations on Wikipedia
What: Meta announced a new tool called Sphere, an AI system built around the concept of tapping the vast repository of information on the open web to provide a knowledge base for AI and other systems. Sphere’s first application, Meta says, is Wikipedia, where it is being used in a production phase (not on live entries) to automatically scan entries and flag when their citations are strongly or weakly supported.
Key Takeaway: Wikipedia is relied on as a source of knowledge by users across all levels of seniority and every profession, which makes ensuring the validity of its information extremely important. However, the task is increasingly becoming impossible for its small team of editors, and if successful, AI support of this kind could significantly improve both editorial workflows and the quality of Wikipedia's entries.
Report: From AI compliance to competitive advantage: The rewards of Responsible AI
Overview
- As companies deploy AI for a growing range of tasks, adhering to laws, regulations and ethical standards will be critical to building a sound AI foundation.
- 80% of companies plan to increase investment in Responsible AI, and 77% see regulation of AI as a priority.
- Most companies (69%) have started implementing Responsible AI practices, but only 6% have operationalized their capabilities to be responsible by design.
Key Takeaway:
The report highlights that firms implementing AI are becoming increasingly aware of impending regulation of its use, such as the EU AI Act (see below), which will govern which applications of AI are permitted. It also finds that firms that are "responsible by design", meaning they apply a responsible data and AI approach across the complete lifecycle of all their models, have seen increased performance.
capAI - A Procedure for Conducting Conformity Assessment of AI Systems in Line with the EU Artificial Intelligence Act
What: Experts at the Universities of Oxford and Bologna have recently launched capAI, an assessment procedure for AI systems which aims to assess them in line with the EU’s proposed Artificial Intelligence Act (unveiled in 2021). Given both the potential of AI across businesses and wider society, and some of the known risks when using it, the developers behind capAI argue that ‘proactively assessing AI systems can prevent harm by avoiding, for example, privacy violations, discrimination, and liability issues, and in turn, prevent reputational and financial harm from organisations that operate AI systems.’
Key Takeaway: Proactive prevention is always better than reactive punishment. The EU made a big wave with its AI Act proposal in 2021, and while the Act has not yet formally come into force, it is good to see a framework being introduced that enables practitioners to build in line with its governing rules.
See this newsletter to keep up to date with the AI act: https://artificialintelligenceact.eu/
AI 4 good & ethical considerations
🚀 The Francis Crick Institute and DeepMind join forces to apply machine learning to biology
🚀 The EU: Artificial Intelligence Act
Other interesting reads
🚀 FIFA will track players’ bodies using AI to make offside calls at 2022 World Cup
🚀 Europeans could be cut off from Facebook and Instagram as soon as September—and TikTok may be next on the block
🚀 Designing Arithmetic Circuits with Deep Reinforcement Learning
🚀 Angela Fan explains NLLB-200: High-quality Machine Translation for Low-Resource Languages (Video)
Cool companies found this week
AI-powered design
Celus - Supports innovation in electronics design. The AI-powered CELUS Engineering Platform generates schematics, PCB floor plans, and BOMs compatible with the industry’s EDA tools.
Climate
Measurabl - Measurabl is a widely adopted ESG (environmental, social, governance) data management solution for commercial real estate. Measurabl helps the companies in the industry optimize their ESG performance, assess exposure to physical climate risk, and act on decarbonization and sustainable finance opportunities.
and finally...
Sphere, Meta's AI tool for Wikipedia in action
AI/ML must knows
Foundation Models - any model trained on broad data at scale that can be fine-tuned to a wide range of downstream tasks. Examples include BERT and GPT-3. (See also Transfer Learning)
Few-shot learning - Supervised learning that masters a task using only a small number of labelled examples.
Transfer Learning - Reusing parts or all of a model designed for one task on a new task with the aim of reducing training time and improving performance.
Generative adversarial network - Generative models that create new data instances that resemble your training data. They can be used to generate fake images.
Deep Learning - A form of machine learning based on artificial neural networks, typically with many layers.
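To make transfer learning and few-shot learning concrete, here is a toy sketch (not any production system's method): a frozen "pretrained" feature extractor, stood in for here by a fixed random projection, is reused on a new task, and only a small new head is trained on a handful of labelled examples. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained base: a fixed projection that stays
# frozen while we adapt to the new task (the "transfer" step).
W_frozen = rng.normal(size=(2, 8))

def features(X):
    # Frozen feature extractor; only the head below gets trained.
    return np.tanh(X @ W_frozen)

# Small labelled dataset for the new task (the "few-shot" step):
# 40 points labelled by whether x0 + x1 > 0.
X = rng.normal(size=(40, 2))
y = (X.sum(axis=1) > 0).astype(float)

# Train just a new linear head via logistic-regression gradient descent.
w = np.zeros(8)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(features(X) @ w)))      # sigmoid prediction
    w -= 0.2 * features(X).T @ (p - y) / len(y)   # gradient step

acc = ((features(X) @ w > 0) == (y > 0.5)).mean()
```

Because the base is reused rather than learned from scratch, only 8 head weights need fitting, which is why a small dataset suffices.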
Best,
Marcel Hedman
Nural Research Founder
www.nural.cc
If this has been interesting, share it with a friend who will find it equally valuable. If you are not already a subscriber, then subscribe here.
If you are enjoying this content and would like to support the work financially then you can amend your plan here from £1/month!