This is Why Businesses Fail at Digital Transformation and AI Implementation – Interview with Dr. Jack Hong, Co-Founder of Research Room

Data-driven decision-making and digital transformation can be some of the most trying aspects of running a business, as many organisations struggle to collect and leverage their data effectively. For this month’s member interview, we had an extremely informative chat with Dr. Jack Hong, Co-founder of Research Room, a company that helps organisations build AI solutions that can predict and solve imminent issues. Through this session, we discover how the company combines formal scientific techniques with creative business strategies to help SMEs uncover hidden value and reach their full potential:

Can you tell us more about your background and your current work?

I am an Adjunct Faculty at Singapore Management University, where I teach Digital Transformation, Corporate Finance, and Data Science at the undergraduate and postgraduate levels.

My time is split between educating our next generation of leaders and designing AI solutions for corporations. As the Artificial Intelligence (AI) and Technology advisor for Certis Group and Vertex Holdings, I continuously advance these organisations’ AI capabilities and push them towards delivering new value propositions in their respective industries.

At Research Room, we incubate state-of-the-art AI technologies in asset management, resource optimisation, and intricate pattern recognition. Our latest obsessions revolve around natural language understanding (e.g., conversational AI) and computer vision (e.g., detecting pests). Precious information that can give organisations an edge over their competitors is hidden in numeric, textual, image, and video data. Non-numeric data, often called unstructured or big data, has dwarfed its numeric counterpart in the digital age. Many of those who engage our services need to derive actionable insights and active interventions from such data.

Do you think most companies are now able to fully leverage their data to gain actionable insights, or are we still far from a fully optimised way of using data?

We are both far from and close to the desired objective. The first question for any business leader should not be about data and technology, but about how to re-position the business strategy around the new drivers of the digital age. Let me explain.

Businesses often think that they require new technologies or access to large volumes of data – a mistake reflected in the numerous reports on the low success rates of digital and AI transformations. I believe that figure is around 15%.

In reality, what organisations need to do first is rephrase the business questions they are asking. Consumers today are empowered by information to make decisions independently. They are no longer convinced by what businesses say. Business owners have to switch their mindset from “what the company can do” to “what their customers need” before even thinking about how to orchestrate their resources to deliver new customer value propositions. Start the thinking process by asking the right questions about customer value propositions, and the solution usually follows instinctively. Only then do we evaluate data and digital technologies as enablers, and how they should be applied.

Unfortunately, many businesses have not elevated their thinking from ‘company at the core’ to ‘customer at the core’. A ‘company at the core’ mindset traps business owners into thinking like middlemen, where every solution must be provisioned through their own efforts. This mistake blinds them to another powerful business model of the digital age: the ecosystem. In an ecosystem, business owners focus on orchestrating, or thriving in, an environment that facilitates transactions between consumers with needs and the producers who meet them. Such an environment makes data available at a personal level to the ecosystem orchestrator, allowing them to personalise, shape, and create new value propositions. Without setting their sights on business objectives in this manner, even the most powerful AI will fail to deliver good value.

When we look at any project through the lens of AI practitioners, we combine data, economic theory, business thinking, intuition, and rapid experimentation to uncover what the market needs. We then distil these insights to evaluate the best among candidate solutions before deployment.

What industries are doing particularly well when it comes to analysing data?

It’s less about a specific industry and more about the company itself. Digital mastery varies widely between companies within the same sector. The ones that have succeeded, and will continue to succeed, have mastered the new business drivers, including the effective orchestration of ecosystem business models and the ability to leverage data and AI.

As a first step to becoming a digital master, a company should consider increasing the customer types in its business model. In any effective ecosystem, there should be a sufficient number of incentivised producers to transact with consumers, and vice versa. The company should also consider engaging vital partners to provide the tools that encourage those transactions to take place. Once these components are connected, the flow of data becomes vital information that businesses can analyse to continuously create new value propositions and stay ahead of the competition.

One of the fields you have been working on is predictive analytics. How has this field evolved over the past few years, and what excites you the most about the future?

If we had asked this question a few years ago, predictive analytics would more often than not have referred to a class of techniques falling under the umbrella of machine learning. Traditional statistics served us well for a long time, but there were limits to its predictive power. Machine learning changed that.

We also saw the emergence of a class of people called data scientists, differing from data analysts, who focus on packaging information for management decision-making. Data scientists are supposed to predict the future – and the more effective ones can derive surprising insights that facilitate active interventions in areas of the business previously unknown to domain experts.

Today, data scientists are required to master not only machine learning but deep learning as well. Modelled after how brains work, deep learning models can learn ultra-complex relationships within data to solve problems we once thought impossible. Deep learning is crucial if we want to understand unstructured data such as images and language. Language itself is extraordinarily complex and noisy. Traditional machine learning methods use a bag-of-words approach, where we count the frequency of words in an article and use that information to predict sentiment or topics. Deep learning models today can accurately identify context by learning the relationships between and within sequences of words.
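To make the contrast concrete, here is a minimal bag-of-words sketch in Python using scikit-learn; the toy sentences and labels are invented purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus: word counts alone must carry the signal.
texts = [
    "great service and friendly staff",
    "terrible service and rude staff",
    "friendly and helpful team",
    "rude and unhelpful team",
]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# Bag of words: each document becomes a vector of word frequencies,
# discarding word order and context entirely.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

clf = LogisticRegression().fit(X, labels)
print(clf.predict(vectorizer.transform(["helpful and friendly service"])))
```

Because the representation is only counts, sentences like “good, not bad” and “bad, not good” look nearly identical to such a model – precisely the gap that sequence-aware deep learning models close.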

Take the implementation of a massively large language model known as GPT-3 (Generative Pre-trained Transformer 3) by OpenAI, for example, which can answer questions with minimal to no training. You can ask it things such as “What is interesting at Gardens by the Bay?”, and it will reply with suggestions such as the ‘Supertrees’ and other facilities, along with helpful information such as the opening hours. You can mention in one sentence that you brought a dog, and GPT-3’s future recommendations will consider whether the location allows pets. You can talk to it as you would to a live customer service officer. I have to emphasise once again that this is all without training. That’s not all. GPT-3 can also extract data from text and represent it in tabular form automatically. It can translate from one language to another and summarise articles. It can translate code from one programming language to another – e.g., C to Python. It can even be prompted to create a website that mimics Facebook, complete with HTML code and content.
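For readers curious what this looks like in practice, here is a minimal sketch of querying GPT-3 through OpenAI’s Python client as it existed around GPT-3’s launch; the model name, prompt, and token limit are illustrative, and the API surface has since evolved:

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

# Zero-shot question answering: no task-specific training,
# just a natural-language prompt.
response = openai.Completion.create(
    engine="davinci",  # a GPT-3 model
    prompt="What is interesting at Gardens by the Bay?",
    max_tokens=100,
)
print(response.choices[0].text.strip())
```

The same call, with only the prompt changed, can be asked to summarise an article, translate code, or draft HTML – which is what makes a single pre-trained model so versatile.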

Due to its open-source nature and strong support from tech giants such as Google and Facebook, AI technology is readily available to anyone who knows how to wield it. This, however, also comes with risks. We can broadly categorise these risks as either intended harm (as in fake news or fraud) or, even more worrying, unintended harm. An example of the latter is gender discrimination, which exists even in transformer models. Today, an AI model simply predicts the most probable answers given the data it has seen before, without any notion of ethics or values. Thus, financial organisations may end up profiling their customers using discriminatory rather than predictive factors, perpetuating a state of unequal access to financial services and, eventually, unequal opportunities in life.
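One simple first check for this kind of unintended harm – sketched here with invented data and column names – is to compare a model’s decision rates across a protected attribute:

```python
import pandas as pd

# Hypothetical loan decisions; the data is invented for illustration.
df = pd.DataFrame({
    "gender":   ["F", "F", "F", "M", "M", "M"],
    "approved": [0,    1,   0,   1,   1,   1],
})

# Demographic parity gap: difference in approval rates between groups.
rates = df.groupby("gender")["approved"].mean()
print(rates)
print("parity gap:", abs(rates["F"] - rates["M"]))  # a large gap warrants an audit
```

A large gap does not prove discrimination on its own, but it tells you where to audit the model’s factors before deployment.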

Can predictive analytics help us prevent future global pandemics?

In short, I would say yes, but it’s more complicated than that.

Our predictive approach to the current COVID-19 pandemic is based on historical data from past pandemics. However, as the battle with COVID-19 dragged on, we observed an increasing drift between actual and predicted outcomes. Eventually, we needed humans to step in and correct the models.
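As a rough sketch of the kind of drift check involved – the window and tolerance below are arbitrary illustrative choices, not values from our work:

```python
import numpy as np

def drift_flags(actual, predicted, window=14, tolerance=0.25):
    """Flag rolling windows where mean absolute error has grown
    well past its initial, deployment-time level, signalling that
    the model needs human correction or retraining."""
    errors = np.abs(np.asarray(actual) - np.asarray(predicted))
    baseline = errors[:window].mean()  # error level just after deployment
    return [
        errors[i:i + window].mean() > baseline * (1 + tolerance)
        for i in range(len(errors) - window + 1)
    ]
```

When a flag fires, the model is reviewed and corrected by a human rather than trusted blindly.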

Moving forward, rather than relying solely on machines, we need to focus on the symbiotic relationship between man and machine. Machines are exceptionally good at what we tell them to do, but qualities such as ethics, empathy, and human assurance are things we as humans need to own.

Deep learning, for example, can predict cancer cells with near-perfect accuracy, but we still require a domain expert to audit the result and subsequently convey the bad news to the patient and their family. No patient will accept their fate from a black box. Even for a minor issue like an insurance claim, I would not feel assured talking to a bot while stranded overseas with my luggage and wallet stolen.

Could AI be used in a harmful or unethical way?

We’re already seeing this happening.

Social media companies make money through impressions and clicks. By telling the machine to learn which browsing patterns increase revenue, what we get from social media recommendations today is curated information that keeps us addicted to our screens. What is worrying is that the machine does not care what it shows you, as long as it can make you stay on your screen longer every day. As a result, everyone sees a different truth depending on what makes them click, pun intended, and this has contributed to high levels of polarisation and teen suicide in today’s society. I encourage you to watch the Netflix documentary “The Social Dilemma” if you have not.

I advocate the use of AI to encourage positive behavioural changes – for example, having the insurance industry use dynamic premium pricing based on individual behaviours over time. If one exercises regularly or drives responsibly, they should enjoy falling premiums month on month.
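As a toy sketch of how such behaviour-linked pricing could work – the discount rate and floor are invented for illustration:

```python
def monthly_premium(base, good_months, monthly_discount=0.02, floor=0.6):
    """Premium falls by a fixed percentage for every month of
    healthy or responsible behaviour, down to a minimum floor."""
    factor = max(floor, (1 - monthly_discount) ** good_months)
    return base * factor

# After a year of regular exercise or responsible driving:
print(round(monthly_premium(200.0, 12), 2))  # ~156.94
```

The point is less the formula than the feedback loop: behaviour is measured continuously, and the price signal rewards it month on month.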

AI capabilities have been hyped for years, but many say that the technology is developing slower than expected and failing to fulfil its promise. An example commonly referred to is autonomous vehicles. What are your thoughts? Do you think we are going through an AI winter?

No. AI capabilities have been accelerating, and I barely have time to catch up on the progress. What is slowing down is the flow of innovative business use-cases for AI. For a long time, companies have been asking the wrong questions about AI adoption. They see AI either as a workforce replacement or as just another digital tool. It is much harder and more time-consuming to create new business propositions with AI. This gives the illusion that AI capabilities have been slowing down. In reality, companies that have mastered the new digital drivers have more opportunities than the resources they can muster.

Autonomous vehicles are a complex issue in themselves. Most cars on the road today are L2 autonomous, capable of lane tracing and intelligent cruising. Tesla is among the most famous names in L3 autonomous vehicles, which use camera-based sense-making technology. Even though computer vision is very accurate today, we still observe shortcomings, such as mistaking a partially vandalised stop sign for a 15 km/h speed limit sign. An Uber self-driving car was involved in a fatal accident with a cyclist in Arizona in 2018. The cyclist was pushing her bicycle across a four-lane road, and the AI repeatedly failed to recognise the object as a human cyclist when it came into view. Eventually, the courts charged the back-up driver of the self-driving car with negligent homicide, as she did not keep her eyes on the road and hands on the wheel as contracted. I believe that responsibility still lies with the driver in an L3 autonomous environment, and this may cause some hesitation in the adoption of L3 autonomous vehicles.

How far are we from AI singularity?

Based on the AI methodologies that exist today, I would say that we are far from reaching this goal. The architecture and mathematics behind state-of-the-art deep learning are not new – they were essentially conceptualised in the 1950s and improved in the 1980s. Without the explosion of data and cheaper technology in the digital age, these techniques would still be buried under decades of literature. We require a breakthrough that changes the current ways in which we build our AI.

The GPT-3 language model has the same architecture as its predecessor, GPT-2. The only difference is that GPT-3 is a much larger model, with 175 billion parameters trained on more data – an exercise that cost USD 4.6 million. Shortly after GPT-3, Google introduced a one-trillion-parameter model called the Switch Transformer. The performance of AI models today is being driven by access to more hardware: OpenAI had access to 10,000 Tesla V100 graphics cards, each costing USD 11,000, to train the GPT-3 model. What we have achieved today is to reach the limits of narrow AI. The industry is still pondering what kind of architectural shift we need to take our first step into the realm of general AI (AGI). We may observe a pseudo-AI singularity at the limits of AGI, but true AI singularity can happen only at the next frontier – Super AI.

On a lighter note, what do you enjoy the most about DOOR XXV?

The drinks! Especially at the end of a stressful week.

The fertile environment where members can mingle and put down their defences over drinks is another highly positive point about the bar. Mark, too, is so much more than a bartender; he has excellent knowledge and profound connections.

How has DOOR XXV facilitated conversations and connections for you as a member?

We’ve had several fruitful discussions with many companies there. I would have to say that a lot of idea-generation happened at the bar.

What’s your favourite drink at DOOR XXV?

I would have to say that my go-to has always been whisky. GlenDronach, if I had to choose one.

How has your experience with SPECTRUM been so far?

I have loved being here from the get-go.

The facilities are excellent, but it is the people that make this place such a conducive environment and a great hotbed for conversations, connections, and growth.

You can connect with Jack via [email protected] to learn more about Research Room and the work they do. As always, you can keep up to date with what is going on around SPECTRUM here.