Navigating a data-fluent ecosystem

The COVID-19 pandemic has intensified organisations’ digital transformation, pushing them to digitise their operations, restructure their business models, improve access to data, and upskill their workforce. This data-driven revolution is opening a plethora of new possibilities for business analysts, many of them aimed at improving efficiency. A machine-learning program takes data as its fundamental input: the data drives the learning phase, after which the system can make decisions based on what it has learned. Data-driven decision-making has repeatedly proved its business value, and even organisations that previously stayed away from digitisation have started working on long-term data strategies. The current ecosystem demands data fluency. Organisations around the globe are improving their technical capacity and becoming more data-driven in their operations, and the workforce needs to build the skills this revolution will demand.
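
As a minimal sketch of that learn-then-decide loop (assuming Python with scikit-learn and a small, made-up dataset), a model is fitted to historical examples and then used to make a decision about a new case:

    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical records: [monthly_visits, past_purchases] -> bought_again (1 or 0)
    X_train = [[12, 3], [2, 0], [30, 8], [1, 1], [15, 5], [3, 0]]
    y_train = [1, 0, 1, 0, 1, 0]

    model = LogisticRegression()
    model.fit(X_train, y_train)      # learning phase: the data shapes the model

    print(model.predict([[20, 4]]))  # decision phase: predicts whether a new customer buys again

The features and the purchase scenario here are purely illustrative; the point is only that the quality of the decision depends entirely on the data the system learns from.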

According to Nature, DeepMind has used deep learning to predict how proteins fold, a problem that has baffled biologists for years. Google and Facebook have redefined the advertising sector, leveraging data and machine learning to improve click-through rates. OpenAI has developed GPT-3, a natural-language generation system that provides intelligence to the next generation of computer applications; according to The New York Times, it can generate tweets, translate languages, summarise emails, write poetry, and even write its own computer programs.
Today, more data is becoming available, computational power keeps growing, and statistical methods are becoming more sophisticated. This confluence of data diversity, storage capability, algorithmic efficiency, and readily available computing resources has paved the way for a surge of innovative disruption. As it advances, it will open up new opportunities as well as challenges. Some of the trends to watch include:

Rise in concerns around data privacy: Lately, companies have made blatant use of personal, often sensitive data, either to train their algorithms for better results or to produce outcomes that directly affect consumers. Businesses use variables such as internet-browsing patterns and location logs to display more relevant advertisements. The idea that one’s own data can be used to make decisions affecting one’s life in many ways is often as unsettling as it is useful. Several governments have already taken strict action against organisations’ data-storage practices. This trend is likely to intensify, with more corporations, policymakers, academics, and government institutions responding with policies, frameworks, laws, and public debate.

Scalable machine-learning operations to garner attention: A data-fluent architecture depends on operationalising sophisticated machine-learning models. Deploying a machine-learning program efficiently requires an agile framework of processes, and an organised lifecycle of continuous improvement keeps the system scalable across future iterations.
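
A minimal sketch of one such repeatable step (assuming Python with scikit-learn and joblib; the dataset, file names, and metadata fields are illustrative, not any particular MLOps product) is to version every trained model together with the metrics needed to compare iterations:

    import json
    from datetime import datetime, timezone

    import joblib
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def train_and_register():
        # Train on a held-out split so each iteration reports a comparable score.
        X, y = load_iris(return_X_y=True)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

        # Persist the model artefact and its metadata under a timestamped version.
        version = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
        joblib.dump(model, f"model_{version}.joblib")
        with open(f"model_{version}.json", "w") as f:
            json.dump({"version": version,
                       "test_accuracy": model.score(X_test, y_test)}, f)

    train_and_register()

Keeping the artefact and its evaluation together is what turns the next deployment, and the next retraining, into a routine step rather than a one-off effort.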

Visualisations to go mainstream: With data scientists producing results that demand their audience’s attention, making those results easy to understand will test designers’ creativity and aptitude. Charts and graphs are often the most easily comprehensible form of data-engineering output. Data visualisation presents data in a digestible form; it is the art of choosing the right charts so that the outcome of statistics-heavy processes is communicated effectively.
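
As a minimal sketch (assuming Python with matplotlib and made-up figures), even a single well-chosen chart can carry a statistics-heavy result to a non-technical audience:

    import matplotlib.pyplot as plt

    # Illustrative conversion rates per marketing channel (percent).
    channels = ["Email", "Search", "Social", "Referral"]
    conversion_rate = [2.4, 3.8, 1.6, 5.1]

    plt.bar(channels, conversion_rate)
    plt.ylabel("Conversion rate (%)")
    plt.title("Conversions by channel")
    plt.tight_layout()
    plt.show()

A bar chart suits this comparison of categories; the same numbers buried in a table of statistics would be skimmed past.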

Considerations on algorithmic bias: Algorithmic bias is a systematic error in computational results that makes a process unfair. A recent example is a recruitment system that preferred male candidates over female ones because its input data came mostly from male candidates. In the years ahead, algorithms will become ubiquitous and play a crucial role in our lives. Many of our decisions will be influenced by algorithms embedded across digital systems, making algorithmic bias a critical issue to ponder. Biased decisions may privilege one group of users over others and exclude a section of society from the benefits. Algorithms are trained on data sets that are often not adequately labelled; they get better at a task with more data, but when that data is produced carelessly, a fundamental bias takes shape.
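
One simple, concrete check for this kind of bias (a minimal sketch with made-up screening outcomes, not a complete fairness audit) is to compare selection rates across groups, in the spirit of a disparate-impact ratio:

    # Hypothetical outcomes of an automated CV-screening step.
    decisions = [
        {"group": "male", "selected": 1}, {"group": "male", "selected": 1},
        {"group": "male", "selected": 0}, {"group": "male", "selected": 1},
        {"group": "female", "selected": 0}, {"group": "female", "selected": 1},
        {"group": "female", "selected": 0}, {"group": "female", "selected": 0},
    ]

    def selection_rate(group):
        outcomes = [d["selected"] for d in decisions if d["group"] == group]
        return sum(outcomes) / len(outcomes)

    ratio = selection_rate("female") / selection_rate("male")
    print(f"Selection-rate ratio (female/male): {ratio:.2f}")  # 0.33 here, a warning sign

A ratio far below 1 does not prove discrimination on its own, but it is exactly the kind of signal that should prompt a closer look at the training data and the labels behind it.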

Rise of no-code and low-code platforms: No-code or low-code platforms are development environments for creating software through graphical user interfaces (often drag-and-drop) instead of hand-written code. Gartner forecasts that by 2024, 75% of large enterprises will be using at least four low-code development tools for both IT application development and citizen-development initiatives, and that low-code will account for more than 65% of application development. The rise of data-driven capabilities has pushed departments that previously ignored them to adopt these tools. At the same time, the growing dependence on computing professionals has created a business case for platforms that do not require a deep understanding of computing concepts to operate. These solutions will reduce the dependence on software engineers and give others a chance to contribute to the coming revolution.

Interdisciplinary debates: For a long time, advanced computational systems have been designed and developed by professionals with computer-science and engineering backgrounds. These computing mechanisms have started to influence other domains, including economics, sociology, and psychology. Interdisciplinary debates are needed so that the social-science aspects of computing are better understood. Technologies such as Artificial Intelligence stand a better chance of benefiting society if their implications are weighed from the perspectives of its diverse strands.

About the author: Vedang R. Vatsa is a Fellow of the Royal Society for the Encouragement of Arts, Manufactures and Commerce. He is a recent Young Researcher awardee and holds MTech and MBA degrees. He has represented the Indian delegation on various national and international stages. With more than ten years of academic and professional experience, he currently works as an IT and management consultant.