#38 State of AI Education
What will the AI Act mean for AI education?
This week, the AI Act was leaked and it offers some insights into the implications for AI literacy.
A term that is used fairly often in the Act is AI literacy. According to the document, it “refers to skills, knowledge and understanding that allows providers, users and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause” (44bh, p. 92). The definition is broad, but it shows that AI literacy is not a fixed body of knowledge: it ties what people know about AI to their role and tasks in their organisation and to the context in which the AI system is used.
The extent to which providers and users of AI need to be knowledgeable about AI also depends on the risk classification of the use case. For high-risk systems, “Providers should ensure that all documentation, including the instructions for use, contains meaningful, comprehensive, accessible and understandable information, taking into account the needs and foreseeable knowledge of the target deployers. Instructions for use should be made available in a language which can be easily understood by target deployers, as determined by the Member State concerned.” (p. 38). This creates an incentive for educational developers to create templates that help providers define the knowledge required for the use of AI systems. At the very least, it forces providers to write a comprehensible description of AI systems and their possible uses.
When it comes to low- or medium-risk systems, the AI Act proposes voluntary AI literacy training: “Providers and, as appropriate, deployers of all AI systems high-risk or not, and models should also be encouraged to apply on a voluntary basis additional requirements related, for example, to […] AI literacy measures” (p. 77). Or: “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on which the AI systems are to be used.” (Article 4b, p. 95). The voluntary nature of promoting AI skills is further emphasized later in the document: “the Commission and the Member States should facilitate the drawing up of voluntary codes of conduct to advance AI literacy among personas dealing with the development, operation, and the use of AI.” (9b, p. 10). Although these measures are voluntary, they will likely encourage companies to develop broad AI training programs.
These are not earth-shattering findings, but they clearly show that AI literacy is on the agenda of the AI Act. The real incentive to invest in AI literacy in companies seems to lie in high-risk use cases.
What we can learn about GenAI from the printing press
When it comes to technology and cultural change, the tail wags the dog. When Gutenberg made the production of books more productive with the printing press, he lowered the barrier to entry for people to publish books. As a result, cities that adopted the printing press earlier produced more famous scientists and artists than cities adopting the technology later. Not only did we see a first-mover advantage, but the printing press also biased the success of content tailored to the medium. While the popularity of scientists and artists increased, the popularity of politicians and religious figures who could not use the technology to their own advantage decreased. Similarly, the advent of television created an incentive to popularise sports stars and actors. Both technologies, printing and television, made it possible to spread existing content to more people.
With GenAI, it is the reverse. The media for distributing content are already established: we have the Internet, YouTube, email, and e-books. What is changing is that people can produce immeasurably more content, even as beginners in a field. With ChatGPT, you can easily draft a complex theater script with twists and turns. You could write an episode for a TV series in an hour. This time, we have lowered the barrier to entry for cognitive work. Undoubtedly, people will take advantage of this opportunity, and we are seeing early signs of this behaviour. Publishers like Neil Clarke complain about the flood of submissions from people who care less about the craft of writing than about the hope of making a buck. A likely counter-movement against the easy production of cognitive work might be an emphasis on craft that is produced without cognitive scaffolds. Podcasts, for example, force people to speak their minds on the fly without processing their sketchy thoughts through Large Language Models. Writers who make it believable that they do not rely on LLMs to fine-tune their writing will be highly regarded.
The state of Machine Learning Engineers, Data Scientists, and Data Engineers in Europe
Here is a thought experiment. In 2021, there were around 20,800 large companies in Germany (with more than 249 employees). If all of Europe's ML engineers responsible for deploying models to production were distributed across these companies, how many would each company get? Two thirds of an engineer! And that is all the talent in Europe spread over only the large companies; we have completely ignored the medium-sized ones. Most of Germany's ML engineers are based in Berlin and Munich, which together account for about 1,000 of them. That is not a promising statistic either. No matter how you work out the thought experiment, we have far fewer ML engineers than we need to deploy ML models at scale.
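The arithmetic behind the thought experiment can be sketched in a few lines. Note that the total engineer headcount below is not stated in the text; it is back-derived from the "two thirds per company" figure, so treat it as an implied estimate rather than a number from the study:

```python
# Back-of-the-envelope check of the thought experiment.
large_companies_de = 20_800       # German companies with >249 employees (2021)
engineers_per_company = 2 / 3     # "two thirds" of an ML engineer each

# Implied total number of production-focused ML engineers in Europe:
implied_ml_engineers = large_companies_de * engineers_per_company
print(round(implied_ml_engineers))  # prints 13867, i.e. roughly 14,000
```

Even this generous allocation, which gives medium-sized companies nothing at all, leaves every large German company short of a single full-time ML engineer.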
The figures come from a recent study by Understanding Recruitment, a recruitment agency based in the UK. The skills shortage is just one of many problems facing educational institutions and companies. Still, 82% of ML engineers are male, they are concentrated in a few major cities, and they seem dissatisfied with their pay. 45% of the 282 professionals surveyed reported being underpaid, a staggering statistic considering that Senior ML Engineers earn an average of $83,442 and Heads of ML Engineering $134,322 in Europe. In comparison, the median salary in Germany is €49,260.
The survey covers two additional roles: Data Scientists and Data Engineers. Compared to ML engineers, there are many more professionals in these roles in Europe: the study found 72,000 Data Scientists and 62,000 Data Engineers. The gender gap is slightly smaller than for ML engineers, but still pronounced, with 71% and 77% of these roles held by men, respectively. One striking difference lies in the degree programmes from which these roles are recruited. Data Scientists and ML Engineers are more likely to have a background in maths and physics than Data Engineers. In addition, Data Engineers are more likely to hold only a Bachelor's degree (~30%) compared to Data Scientists (~12%) and ML Engineers (~12%).
Draw your own conclusions from these figures. My main conclusion from them is that we need to invest in building ML engineering skills to help organisations deliver their models. No model is worth anything if it is not deployed.
Job automation through AI is not yet financially interesting for many companies
It is not enough to have a technology at your disposal to be able to use it. We know this from the study of learning strategies, where learners often show what is called a utilisation deficiency: they don't apply learning strategies because these require too much mental energy and take time to pay off. In the field of AI, most organisations do not jump in straight away and experience a certain inertia when it comes to implementing or purchasing AI applications. Yet even setting aside missing skills and the absence of a culture that values data-driven decision making, AI is simply not economically feasible for many organisations. This hypothesis is supported by recent data from Maja S. Svanberg and colleagues, who modelled the likelihood of companies using AI systems for computer vision tasks. Their results indicate that only 23% of tasks are attractive for automation. For companies with 5,000 employees, only a tenth of these are attractive given the current cost structure. According to the authors, there will be a massive shift in jobs, but it will be much slower than feared, leaving enough time to retrain employees. Reading this, it seems likely that larger technology companies will take over the development of AI vision systems and offer them at scale as software-as-a-service.
A list of the top talents in AI
If you're curious about who is actually leading the development of AI, take a look at the Time 100/AI list. Published in October 2023, it includes people of all ages and backgrounds: for example, 18-year-old Sneha Revanur, who founded Encode Justice, a group of young people pushing for human-centered AI, or Holly Herndon, who works at the intersection of music and AI. The list is divided into the categories leaders, innovators, shapers, and thinkers. Have a read. Highly recommended.