Authors:
Claire Thomas Gaggiotti & Matteo Aragone
2021 was a challenging year all round, marked by the economic and social fallout of a continuing pandemic and by high-stakes climate change summits. A new year is an opportunity to look forward with optimism to new trends and initiatives.
In this article we’ll present what we feel are the six main trends and buzzwords for 2022 in digital technologies and AI.
- Data-driven culture
- Responsible AI
- Explainable AI
- AI for Sustainability
- Citizen data science
- Data quality
1. Data-driven culture
Practically all organizations aspire to become data-driven, but according to a survey conducted by Forbes contributor Randy Bean, only a quarter of executives feel they have succeeded in bringing about this shift, and most state that the problem is cultural, not technological.
Apart from the commitment of all levels of management, and sufficient budget and resources, it all comes down to a question of trust. Many managers are used to making decisions based on gut feeling, which has probably served them well so far, and are afraid of losing control to geeky data scientists. Many are also wary of the upheaval involved in implementing new systems and reviewing all current processes.
So it’s vital to choose the right tools: tools that provide quality insight while promoting collaboration between business and IT experts. When decision makers realize they are receiving data-driven support, rather than ceding their judgment to indecipherable algorithms, many more organizations will be able to successfully embrace a data-driven culture.
The process is not instantaneous: it requires investment in training (technical or otherwise), in people, and, above all, in the time needed to bring all stakeholders on board. But it delivers results in terms of quality of work and improvements to process management.
2. Responsible AI
As AI permeates our everyday lives, recommending films to watch, products to buy, and books to read, it tends to be seen as a fun technology that improves the quality of our free time.
However, AI is also used in far more serious scenarios, such as deciding whether you’re likely to pay back a loan or make legitimate insurance claims! Responsible or ethical AI deals with cases where AI is used to make decisions with significant moral and practical consequences for people.
It’s not a new concept: Europe had already produced guidelines on ethics in artificial intelligence in 2019. But as more cases of unfair bias come to light, including machine bias against Black defendants in US court sentencing and racial bias in healthcare, and as diversity and inclusion continue to demand attention, responsible AI has become a major buzzword.
Although many industries are already putting ethical AI policies in place (89% of industrial manufacturers in 2021), actually implementing those policies remains a real challenge.
The initial data used to train algorithms must be free of bias, and, as Reid Blackman, CEO of Virtue, puts it, “data scientists lack the training, experience, and business needs to determine which of the incompatible metrics for fairness are appropriate.”
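To make that incompatibility concrete, here is a minimal sketch in Python, with purely illustrative toy data and invented group labels, in which the same model satisfies demographic parity (one common fairness definition) while failing equal opportunity (another):

```python
# A toy sketch: the numbers and group labels below are purely illustrative.

def demographic_parity_gap(preds, groups):
    """Difference in positive-prediction rates between groups A and B."""
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members)
    return rate("A") - rate("B")

def equal_opportunity_gap(preds, labels, groups):
    """Difference in true-positive rates between groups A and B."""
    def tpr(g):
        hits = [p for p, y, grp in zip(preds, labels, groups) if grp == g and y == 1]
        return sum(hits) / len(hits)
    return tpr("A") - tpr("B")

preds  = [1, 0, 1, 0, 1, 0, 1, 0]   # model decisions (1 = approve)
labels = [1, 1, 0, 0, 1, 0, 1, 1]   # actual outcomes (1 = would repay)
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_gap(preds, groups))         # 0.0: "fair" by one metric
print(equal_opportunity_gap(preds, labels, groups))  # about -0.17: "unfair" by another
```

The same model can be certified fair under one metric and flagged as biased under another, which is exactly why choosing the appropriate metric is a business and ethical decision, not just a technical one.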
The rules produced by AI algorithms must then be closely analyzed to remove bias before being applied to new data, which means they must be clear and legible. This brings us to the next buzzword and big trend of 2022: explainable AI.
3. Explainable AI
The general perception of AI is that it is extremely complicated and intrinsically inexplicable. This was certainly true in the not-so-distant past, when AI algorithms were mostly black boxes, producing indecipherable mathematical equations. But as we have already discussed, responsible AI means providing transparent explanations of how decisions are made, which is often a legal requirement, so the need for transparency goes hand in hand with the need to ethically mitigate bias.
According to Forrester, 20% of enterprises will already rely on explainable AI in 2022, and this percentage is set to rise sharply. But are there explainable AI solutions on the market ready to meet these needs, without downgrading performance or requiring additional layers of software to make their results comprehensible?
Luckily the answer is yes. One of the main players in this field of innately transparent algorithms is Rulex’s Logic Learning Machine (LLM), first developed in 2014 by the mathematician Marco Muselli. LLM produces results as fully legible if-then rules, yet has the computational speed and accuracy of a black-box solution.
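To illustrate what legible if-then rules mean in practice, here is a hypothetical sketch in Python. It is not Rulex’s actual API or algorithm; the rule conditions, field names, and outcomes are invented for illustration:

```python
# A hypothetical illustration (not Rulex's actual API) of rule-based
# prediction: each rule can be read, audited, and challenged by a
# domain expert, unlike an opaque numerical score.

rules = [
    # (condition, outcome) pairs a loan officer could read and verify
    (lambda r: r["debt_ratio"] > 0.6 and r["missed_payments"] >= 2, "reject"),
    (lambda r: r["income"] >= 40_000 and r["missed_payments"] == 0, "approve"),
]

def predict(record, default="review"):
    """Apply rules in order; fall back to manual review if none fires."""
    for condition, outcome in rules:
        if condition(record):
            return outcome
    return default

applicant = {"debt_ratio": 0.3, "income": 52_000, "missed_payments": 0}
print(predict(applicant))  # -> "approve", with a specific rule you can point to
```

Unlike a black-box score, each decision traces back to a named rule that a loan officer or auditor can read, question, and override.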
4. AI for Sustainability
It is a great relief to add sustainability to the list of buzzwords for 2022, as the health and wellbeing of future generations depend on it.
In 2022 corporate sustainability is very much focused on the concrete actions companies can take to combat climate change by reducing their carbon footprint. Over 200 of the world’s largest companies have already taken The Climate Pledge, with the aim of reducing carbon emissions to net zero by 2040.
But what is the role of digital technologies in this pledge?
According to The Royal Society report “Harnessing computing to achieve net zero”, digital technologies such as machine learning and AI could deliver nearly one third of the carbon emission reductions required by 2030.
According to the United States Environmental Protection Agency, the transportation sector is the main culprit. In 2019 it was responsible for 29% of US greenhouse gas emissions, and 83% of these emissions came from small and large trucks.
There are long-term plans to reduce emissions through the adoption of electric and hydrogen trucks and biofuels, but in the short term, digital technologies can already do a great deal simply by reducing the number of trucks on the road. This can be achieved by identifying the best routes and travel times and by maximizing truck loading.
Rulex Axellerate is an innovative example of how transportation can be optimized: it considers both current and future orders, not only identifying the best shipment time but also combining deliveries, consequently filling trucks to maximum capacity and reducing both the number of trucks on the road and their carbon emissions.
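As a rough sketch of the consolidation idea, the snippet below uses generic first-fit-decreasing bin packing, a classic heuristic rather than Axellerate’s actual algorithm, with invented capacities and order sizes:

```python
# A minimal sketch of load consolidation via first-fit-decreasing bin
# packing. Capacity and order sizes are purely illustrative.

TRUCK_CAPACITY = 100  # e.g., pallets per truck (invented value)

def consolidate(order_sizes, capacity=TRUCK_CAPACITY):
    """Assign orders to as few trucks as possible, largest orders first."""
    trucks = []  # each truck is a list of the order sizes it carries
    for size in sorted(order_sizes, reverse=True):
        for truck in trucks:
            if sum(truck) + size <= capacity:
                truck.append(size)  # fits in an already-dispatched truck
                break
        else:
            trucks.append([size])  # no existing truck fits: add a new one
    return trucks

orders = [60, 50, 40, 30, 20, 90, 10]
loads = consolidate(orders)
print(len(loads), loads)  # 3 trucks instead of one per order
```

Seven separate orders here ship on three trucks instead of seven, and every trip avoided is carbon saved.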
5. Citizen data science
Processing small amounts of data across two or three spreadsheets is within most people’s reach, but aggregating large quantities of data, building predictive models, and optimizing processes means bringing in highly specialized data scientists and data analysts. These specialists traditionally use complex tools, often based on programming languages like SQL, R, Python, and C++, which go far beyond the average business expert’s remit.
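For context, even a routine business question, such as the average order value per customer segment, requires scripted code along these lines (pandas shown as one common choice; the table and column names are invented):

```python
# A small example of the kind of scripted aggregation that is second
# nature to a data scientist but a barrier for most business experts.
import pandas as pd

orders = pd.DataFrame({
    "segment": ["retail", "retail", "wholesale", "wholesale"],
    "value":   [120.0, 80.0, 1500.0, 2300.0],
})

# Group, aggregate, and sort: three chained operations that a no-code
# tool would instead expose as three connected visual blocks.
summary = (
    orders.groupby("segment", as_index=False)["value"]
          .mean()
          .sort_values("value", ascending=False)
)
print(summary)
```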
As a result, a series of no-code tools (including the Rulex Platform) have appeared in recent years, enabling advanced operations (data pre-processing, machine learning, optimization…) through visual elements such as workflows. These tools are simple and intuitive enough to be used by business profiles (also called citizen data scientists or citizen developers), yet powerful enough to be appreciated by data scientists, data analysts, and IT personnel.
They enable different company departments to work together, with significant improvements in terms of results and time savings – amounting to a 4.6x productivity gain over traditional programming, according to research by No-Code Census.
And companies have taken note: the low-code market is growing annually by more than 20%, according to estimates by Gartner.
6. Data quality
We’ve covered many different aspects so far: corporate culture, processes, technologies, tools… but one obvious and utterly essential prerequisite is missing: data. Specifically, clean and reliable data.
Two years of Covid have taught us the importance of clean and consistent data. We’ve all read conflicting data-driven predictions from scientists, physicians, and research institutes. There are many reasons for this: data may be inconsistent (based on different assumptions or samples) or “dirty” (incomplete, or containing input errors), and therefore difficult to read. The lower its quality, the more open data is to differing interpretations, and the more likely predictions are to be wrong.
A similar scenario applies to corporate data. Bad data can lead to inefficient operations, lost profits and mistakes in strategic decision making. It’s estimated that large US enterprises lose 15 million dollars a year because of bad data. So it’s essential to prioritize the data quality process, from data creation to storage, guaranteeing accuracy, accessibility, completeness, consistency, validity, and uniqueness. The road ahead is clear: according to Gartner, by the end of 2022 an impressive 70% of companies will be closely monitoring the quality of their data.
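As a sketch of what monitoring some of these dimensions can look like in practice, the snippet below runs simple completeness, validity, and uniqueness checks on an invented customer table (column names and rules are purely illustrative):

```python
# Toy data-quality checks for three of the dimensions listed above:
# completeness, validity, and uniqueness.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],                              # duplicate key
    "email": ["a@x.com", None, "b@x.com", "not-an-email"],    # missing/invalid
    "age": [34, 29, 29, -5],                                  # out-of-range value
})

report = {
    # completeness: share of non-missing values per column
    "completeness": customers.notna().mean().to_dict(),
    # validity: do values respect simple business rules?
    "valid_age": bool(customers["age"].between(0, 120).all()),
    "valid_email": bool(customers["email"].str.contains("@", na=False).all()),
    # uniqueness: no duplicate primary keys
    "unique_ids": bool(customers["customer_id"].is_unique),
}
print(report)
```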
One of the most innovative developments is without doubt augmented data quality, which automatically suggests adjustments and corrects data. Rulex RDC, for example, tracks down data inconsistencies even where correlations seem impossible to find, and suggests data corrections that business experts can accept, refuse, or improve.