I have been using ChatGPT – an AI-powered language model developed by OpenAI – mostly to amuse myself and to get a sense of how it works. I have also been testing the depth of its knowledge across a range of subjects – and it has mostly satisfied.
Additionally, I have been secretly delighted – and highly amused – by its code-switching between (human) languages, produced in response to my specific request. I asked ChatGPT (in English) to write the plot of a Bollywood action drama, with dialogue in both Bollywood Hindi and English. I was super impressed by ChatGPT's ability to switch seamlessly between languages – which showcases its adaptability and potential for diverse applications.
I am already using a version of AI to augment my work, and I see AI as a tool that enables me to be more effective and efficient.
Many years ago, when we started rolling out applications and using the internet for work, it all seemed like magic. I remember some of the conversations about how dangerous it all seemed and how "the machines were taking over". And some of us will remember Y2K (the year 2000 computer bug scare), which spawned plenty of myths and misinformation: planes would magically fall out of the sky because machines would behave in ways we couldn't predict, banking system failures would trigger a global economic collapse, and there would be widespread power outages.
I am seeing the same type of hype with AI. Yes, just as with the internet, social media and the rest, there are dangerous elements to AI – mostly because human beings are involved. And where there are humans, there is fallibility.
But here is how I intend to use AI in my work.
1. I will continue to use AI to refine my writing. As an example, the first draft of this post was put through our internal ChatGPT, and the edits it recommended made my post better, for sure. Here is the feedback ChatGPT gave on my post: "Overall, your LinkedIn post is well-written and provides an insightful perspective on the potential use of AI in your work." This gave me a bigger thrill than if a human had provided the same feedback. I am still processing why I feel that way.
2. Now here is one "requirement" for my future use of AI. When I coach people, my AI assistant will capture the conversation, summarise it and then create actions. My AI assistant will then:
Track the actions of these people to assess progress. For example, let's say one of my people says they want to speak up more in meetings. The AI will track them (digitally, and with their permission of course) to see whether they are actually speaking up more – and I can compare what the AI summarises with what the person reports to me.
This will mean that my sessions will be more informative – shaped by real data rather than information coloured by the person's biases. (E.g., Person: "I barely spoke up in that session." AI: "Person spoke up 80% of the time and introduced random topics not related to the agenda.") Speech coach already does some of this – but what it does is the tip of the iceberg compared to what I want it to do to be really useful.
And yes, I know that there are ethical considerations and guidelines to be defined for how we use AI – for example, data use, algorithmic transparency and addressing bias. But as I see it, where there are humans, there will always be challenges. Our current use of technology and social media has already proven that. This does not mean we should fear the unknown. We just need to learn more about AI (rather than be fearful of it), and try to shape our world more meaningfully.