#40 State of AI Education
So how important is writing in the age of GenAI?
In a recent study, Henner Gimpel (LinkedIn post) and many colleagues discuss in detail how AI will affect human skills, focusing on GenAI tools. The text makes for an interesting read. One claim that caught my attention is that GenAI will make core skills such as writing, active listening, programming, or reading less relevant. For example, the authors write: “Until now, it has generally been very difficult to go through life as an illiterate person. With increasingly better tools to translate written text into spoken language and vice versa, the ability to read and write might become less relevant. General literacy, which does not include information and data literacy as digital competencies, loses relevance for digital content automatically checked, corrected, and improved” (p. 17). This is a bold claim that calls for a few responses.
First, this is not simply a question of whether writing will be replaced or not. The writing process is complex, and large language models can affect it in many ways: we can copy the generated text without reviewing it, or we can use the model's output to learn from the differences between our text and the generated text. How we use the tool is crucial. Second, cognitive tools such as GenAI do not necessarily make a skill less relevant. Mathematics is the best example: the discipline is thriving even though calculators, programming languages, and GitHub Copilot take on many subtasks. Instead, we work at a higher level of abstraction. Third, writing is the most important cultural achievement of the human species and will most likely remain so. Fourth, GenAI tools themselves require writing. We could not work with ChatGPT if we were unable to write. Admittedly, multimodal models let us speak to them, but that is not how most users interact with these tools right now. Also, GenAI will force us to consume even more text in the near future because it makes text so easy to produce. And through informal learning experiences, knowledge workers will constantly be exposed to improved versions of their texts through large language models, which will hopefully make them think about how to learn from the model's recommendations.
The other question is whether we really want to go in that direction, given that we have very limited knowledge of what will happen in the future. Writing has a purpose in itself, and it is the best window into the minds of people who are long gone. If we lose our interest in writing, we cripple our cultural heritage. Rather, I suggest we distinguish ephemeral writing from enduring writing. Examples of ephemeral writing are emails or chat messages that are rarely read two weeks after they have been written. Examples of enduring writing are books and journal articles that become part of our cultural heritage. If GenAI lowers the value of ephemeral writing, that's perfectly fine with me. The moment it interferes with enduring writing, we should take action against it.
New whitepaper on ML Skill Profiles
If you want to know which skill mix a professional machine learning team requires, I highly recommend the whitepaper "ML Skill Profiles", published by my colleagues Alexander Machado and Max Mynter from the appliedAI Initiative in collaboration with many partners. In their text, they describe ten roles and their skill profiles in detail. Their secret recipe is to map these skill profiles to the phases of the ML lifecycle, from project planning to model deployment. For each skill profile, the whitepaper describes in which lifecycle phase it becomes important and which tasks it covers. This is a nifty tool for medium-sized and larger companies to identify gaps in the skill composition of their teams.
Thinking first versus LLMing first
The research of Robert Bjork, the renowned learning and memory researcher, is extremely interesting. A key insight from his work is that retrieving information strengthens our memory of that information. For example, if I list all the federal states of Germany, the act of retrieval strengthens my memory of these states. So if retrieval is a memory modifier, we can put it to good use when working with Large Language Models (LLMs). Most surveys show that knowledge workers use LLMs to generate ideas, and a major advantage of LLMs is increased productivity. If we try to predict the output of an LLM before we see it, we can reap both benefits simultaneously: we use the technology as a cognitive tool for learning while keeping our productivity gains. Of course, this only applies to prompts where we have a realistic chance of retrieving something we already know. To illustrate this, let's look at two use cases. Suppose you want the LLM to explain how sugar is metabolized into fat when you eat too much of it. After you have written the prompt and before you press Enter, take a few seconds to think of an answer yourself. Then press Enter and compare your answer with the result of the LLM. Or say you use the LLM to translate an email from German into English. You don't want to predict the entire email, but you could predict the translation of a difficult word. Once you see the output, compare your translation with the LLM's result. As tiny as this task may seem, it accumulates over time. If you do this 15 times a day on five working days a week, you end up with more than 3,000 retrieval attempts in a year. That's a huge learning opportunity. If this has piqued your interest, I recommend the article "Thinking first versus googling first: Preferences and consequences" by Saskia Giebl and colleagues, which explains the research behind it in detail.
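A quick back-of-the-envelope sketch confirms the estimate above. The assumptions are mine, not from a cited source: 15 prediction attempts per working day, five working days per week, and a full 52 weeks with no holidays subtracted.

```python
# Back-of-the-envelope check of the retrieval-practice estimate.
# Assumptions: 15 attempts/day, 5 working days/week, 52 weeks/year.
attempts_per_day = 15
working_days_per_week = 5
weeks_per_year = 52

attempts_per_year = attempts_per_day * working_days_per_week * weeks_per_year
print(attempts_per_year)  # 3900 -- indeed "more than 3,000" per year
```

Even with holidays and sick days subtracted, the total stays comfortably above 3,000 attempts a year.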
We need to make the job losses caused by AI transparent for the next generation
Once again, there is new data on how young people think about AI and how it could affect their careers. The Knowledge Academy recently published a survey of more than 2,000 UK students. When asked whether they considered AI taking over jobs a risk, a quarter said yes. Interestingly, when asked why jobs are being replaced, they named different reasons: among other things, that AI will replace manual tasks (22.4%), make tasks more efficient (17.2%), become cheaper than humans in the long term (14.4%), or complete tasks faster than humans (14.2%). Incidentally, this is not a youth-specific perspective: about a quarter of people fear losing their job to AI, regardless of age. The least we can do is give young people an accurate assessment of which tasks, or even entire jobs, will be replaced by AI. Of course, this is already happening, but not to the extent that many people imagine. If you want to learn more about this topic, read the article "19 statistics showing job losses due to automation in 2023".
The State of AI Talent 2024 report
The common metaphor for the movement of talent between ecosystems in highly competitive fields is the "war for talent". In my opinion, this metaphor is deeply misguided and relies on inappropriate language: unlike in a war, no talent dies when it changes companies. Instead, I prefer to speak of a "dance of talent", which emphasizes that there are dance partners of varying attractiveness and that partners often switch. Another image I came across this week is that talent is "walking intellectual property", which is what makes it attractive in the first place. This play on words comes from the report "The State of AI Talent 2024", of which I unfortunately only had access to a section (the full document costs £2,500). Still, the insights offered in the free version are interesting enough. Here are a few things I took away from it:
Large companies attract a considerable amount of talent from the Big Five, such as Amazon, Google or Meta.
In Germany, the United Kingdom, the Netherlands or the Nordic countries, there is no pronounced brain drain. Other studies have also shown this in the past. However, the brain drain is not the same everywhere in Europe. Spain and Italy, for example, are currently experiencing a net outflow of talent.
While we often emphasize that Silicon Valley and China are massive pull factors for talent, we also see national champions emerging in large economies, such as in the healthcare sector in Europe, where the number of AI professionals has increased twenty-fold in the last ten years. This, too, should make us think about how we can use AI to strengthen existing local industries.
Asia and Southeast Asia mainly attract talent from the US and not from Europe or other regions.
The document seems to be a treasure trove for people interested in the subject. The price is hefty, but perhaps worth it for some.