Are you sick of articles about AI yet? Whether it’s ChatGPT, Google’s Bard or stories about Bing going rogue, commentary about generative AIs and their implications is abundant. It’s not just practitioners or consultants either; academics and academic journals are rushing to get articles and research out about these emerging technologies. In many ways, this rush of attention makes sense. ChatGPT recorded over 100 million users within two months of its public launch, making it one of the fastest-growing user bases on record. Yet the amount of attention, impact and focus these tools have generated is also somewhat surprising, given most are still in developmental phases. There are already hundreds of ‘definitive guides’ about how to use these tools, yet the tools aren’t even fully built or launched. As an example, GPT4 has just launched, which will affect how paying users interact with ChatGPT and its APIs, making these ‘definitive guides’ already outdated.
All this coverage makes it hard to clearly see what the impact of generative AI tools will be for individual businesses. Some articles seem to hail generative AI as a complete game changer with the potential to affect or even replace human functions. Others focus on the current limitations and the errors these tools make, suggesting they’re overhyped and won’t have that much impact after all. This is particularly evident across education sectors, where some educators and institutions are embracing generative AI while others seek to ban it. Retailers are grappling with this as well, trying to unpack how AI tools might affect staff.
When strong competing positions like this exist, the answer is usually somewhere in the middle. Take Amazon’s arrival in Australia as an example. A few years back, some predicted Amazon would be the death of Australian retail as we know it, while others predicted the company would fail due to the different circumstances locally. A few years on, has Amazon completely decimated Australian retail? No, our retailers are still here (despite a pandemic) and some have recently had record results. So has Amazon failed and left? Also no, its sales rose strongly last year, in line with additional infrastructure.
That’s just one example of a major trend that attracted a lot of competing predictions, but in the end neither destroyed an entire industry nor fizzled out. The truth was somewhere in the middle. Sound familiar? We think so. So with colleagues from Swinburne University of Technology we recently published a peer-reviewed academic article exploring the competing views on generative AI to see if we could find this middle ground. We approached it through the lens of multiple paradoxes that generative AIs present. While our focus in that specific paper was education, the paradoxes are relevant to all industries.
Generative AI is a friend yet a foe
The first paradox we focused on was the role of generative AI tools as both a friend and a foe. Generative AIs are a friend in that they facilitate knowledge acquisition through human-like interaction at a remarkably fast speed. It would take substantial time to scour the internet yourself for the answers AI tools are able to provide almost instantly. Tools like ChatGPT are extremely powerful at elevating knowledge; sort of like having a friend who knows a lot about almost everything and is available 24/7. The almost human-like interactions users can have with these tools add to the perception that AI is a friend. Are you one of those users who thanks ChatGPT, or maybe your Google, Siri or Alexa assistant, when they give you an answer? If so, you’re probably already seeing these tools as more of a friend than an algorithm searching and summarising data.
Yet, if you had a friend that you relied on for their knowledge, and they regularly gave you the wrong answers – would you still be friends? Or what if that friend claimed an idea was their own and it turned out they got it somewhere else? This is the reality of generative AI tools – at least in their current state. While ChatGPT’s underlying model is set to drastically expand by the time this goes to print (hello GPT4), there are still limits to the information these tools can access. Even with more data, these tools will still be limited because they aren’t actually creating anything new; rather, they are summarising or synthesising information using predictive modelling. Depending on the task, that can be really beneficial or highly dangerous.
For instance, if you’re moving into a new internal team and need to upskill on the jargon and frameworks, then generative AI could be your best friend. Yet if you ask it to write copy for an email, or generate images for a campaign, then you’re likely to get something at best generic, and at worst totally ripped off from someone else. So the distinction between friend and foe comes down to how you use these tools – which leads to the second paradox.
Generative AI is capable yet dependent
Need inspiration for a campaign? AI can generate dozens of options almost immediately. As noted above, these options might be generic or copied, but they could still spark an idea that leads you to the solution.
This highlights an important note about these tools. While they appear highly capable, and even confident, in the answers or solutions they provide, the quality and accuracy of the responses varies. Because of their predictive nature, they occasionally provide false information but still defend the answers when they do. You might have seen the viral interaction with Microsoft’s Bing AI around what year it is as an example of this. So while these tools appear highly capable, they are still very dependent on the way users interact with them.
Users who think of AI tools as a starting point, rather than the solution, are having the most success. In fact, this is how we are now training our students to use these tools – not to generate their assignment, but rather as a useful starting point to build on, finesse or apply in a more specific context. In other words, having access to these tools can be a real advantage, but only if you learn to use them effectively.
Generative AI is accessible yet restrictive
Speaking of access, one of the reasons generative AI tools have gathered so much attention is that anyone has been able to sign up and start using them. ChatGPT’s meteoric rise in users highlights this effect, and directly relates to the ‘Open’ part of ChatGPT creator OpenAI’s mission.
Yet as access has quickly increased, so too has the load on the tools and their servers. A full 100 million people trying to ask ChatGPT all the secrets of the universe is a lot for any tool to handle. We saw tools like ChatGPT and Bing implement limits on the number and frequency of questions users could ask as temporary solutions, and predictably this has already turned into new paid versions that allow additional usage and priority access. GPT4 has just launched but is currently available only to paid users.
The paradox this generates is around access compared with restrictions. The promise of these tools is the access they provide to a wealth of information and outputs. Yet as more people try to access them, they become increasingly restrictive – or they prioritise some over others. This leads us to our final paradox.
Generative AI gets even more popular when banned
As generative AI gets more popular, more scrutiny has been placed on the implications of these tools – not just for what they do, but how they do it. Questions around IP and ethics have been raised and are yet to be clearly answered, as are concerns around potential bias. This has been a big topic within education, with questions around how these tools affect the integrity of learning. In response, many education institutions have tried to limit or ban the use of these tools. Businesses are now grappling with a similar challenge – when and how to allow or restrict use of generative AI. Yet, as we’ve seen throughout history with other banned products, the main effect has been to increase interest and use. So organisations need to think carefully about any restrictions they might impose on the use of these tools, as they just might have the opposite of the intended effect.
What this means for retailers
While our paper was originally focused on education, the paradoxes we unearthed cross over to a broader discussion about the impact of these tools in retail and beyond. By describing these paradoxes, we hope to present a balanced view between those predicting generative AI will change everything, and those who say its limitations mean it won’t impact things at all. Like us, you and many of your colleagues are likely already experimenting with these tools, and some will have already embedded them into workflows. Or, some of you may be actively avoiding these tools for a variety of reasons. Either way, our key message is this: we can’t ignore these tools or the impact they’ll have, but we shouldn’t panic either.
While generative AI can be a friend, the tools are still dependent on how you use them – the questions you ask, and what you choose to do with the answers. So consider them another potential tool in your toolbox – but don’t throw out the others just yet.
This story first appeared in the May 2023 issue of Inside Retail Asia Magazine.