My Weekly Preview

Retrofitting rules in an AI world

Artificial intelligence chatbots have changed the way students learn. But will that make them less informed adults? WORDS: Lucinda Dean.

We got our first home computer in 1983: a BBC Model B, a British make, as my family was living in England then. I’ll never forget the excitement of that day. My 10-year-old self was thrilled at the prospect of simply typing my school assignment questions into this neat little machine, which would then spit out the answers and do my homework automatically for me. I was crestfallen when Dad told me that was not the way computers worked.

Forty years later, what I had envisaged then is now a reality. When AI chatbots ChatGPT and Bing Chat exploded onto the scene like a technology supernova in November 2022 and May this year respectively, they generated much, well, chatter about the pros and cons of their widespread adoption in schools, universities and workplaces.

When I was at uni in the early 1990s, I scrambled for physical books in the library stacks to complete my assignments. In my first year as an undergrad, I hand-wrote my essays until I was told to submit them typed. I’d finished uni before the birth of Google.

Throughout my career, though, Google has been an indispensable research tool. But have AI chatbots become “next gen” search engines?

ChatGPT and Bing Chat are free online tools that can swiftly conduct research and even write longer-form pieces such as essays and creative short stories.

My 17-year-old stepson Marcus, who’s in his final year at a state high school, gave me a demonstration of Bing Chat. My mind was blown when it generated text answers to my question prompts within minutes. It also cites sources, which theoretically should make it relatively easy to fact-check.

Australian schools and universities have responded variously to these super-charged AI chatbots. Five states, including Queensland, have recently banned ChatGPT in public schools.

Fears were raised at a tertiary level that using AI chatbots was tantamount to academic dishonesty and plagiarism. It could make students lazy and bereft of original ideas or critical thought.

The University of Sydney’s academic integrity policy regards generating content using AI as a form of cheating, whereas Flinders University (while concerned about the difficulty of detecting AI-generated text) aims to leverage ChatGPT to support learning rather than ban it.

Marcus says he mainly uses Bing Chat at school, as do his peers.

“I would say the majority of kids are using it, at least just a little, whether that’s to write an entire essay or just to give them direction and ideas.”

One of the benefits, according to Marcus, is it helps with cohesion when writing short stories for English assessment.

“If what I’ve written doesn’t sound right, I can plug it in and ask it to rewrite it, so it flows better.”

Sunshine Coast Grammar Principal Anna Owen said Covid lockdowns forced the school to reimagine the relationship between knowledge, the student and the teacher. She sees ChatGPT as an extension of this lesson rather than as the new disruptor.

According to Mrs Owen, navigating academic integrity, authorship and ownership in these early days of AI chatbots has been the school’s biggest challenge.

“Students are not gullible, however, they are innocent,” she says. “If they read something, they tend to believe it. An important skill for young people is to be open to multiple opinions, debate and sources of information, mentoring and teaching. The students of today need to learn to be critical consumers of all forms of data and information, and make decisions based on ethics, values and empathy.”

Marcus, however, is ambivalent about whether students can be trusted to use the tool conscientiously.

“I would say there is a mix of students who use it really well and those who just paste a question in and copy the first response without reading over it.”

He said there was a danger that if younger kids gained access to AI chatbots before developing the fundamentals of critical thinking, it could stunt their intellectual growth, because they would assume the AI could do the thinking for them.

“If younger kids start to use it really early on without teachers showing them the correct way to use it then there is absolutely some concern to be had.”

Johannes Klupfel, owner and director of Sunshine Coast-based digital marketing agency Cloud Clicks, says the agency has been using AI tools since at least 2020. It now uses ChatGPT to generate ideas for blog posts and advertising copy, and is also experimenting with Midjourney, another generative AI tool that creates images from natural language descriptions called “prompts”. So far, the agency has used only a handful of AI-generated images in its client work.

Mr Klupfel says ChatGPT does not live up to the hype.

“Usually, it takes a few prompts to get good output and then you have to edit the output to make it usable,” he says. “We use it for outlines, for competitor analysis, and now, with access to plugins and the web, a whole new horizon has opened up.

“Another way to use ChatGPT is to put content together once you’ve done your research. For example, I might have half a page of notes and I know what I want to say. I can then just drop it into ChatGPT and give some additional instructions, and it gives me pretty good output. I then spend a little more time cleaning it up and I end up with a decent email or blog post.”

Mr Klupfel warns these tools are still in beta, which means they may or may not work correctly, and that there are limitations to this technology at present.

“If you have good input data, say from the ABS [Australian Bureau of Statistics], it can pull facts out of that for you, but if you want to use it for customer research it gets more complicated and more nuanced, and I don’t think AI is at a point where it can do this for you.

“It can do parts of it, but it still needs more time. And I am not sure it will ever be able to replace actual humans. I think the future is human ingenuity plus AI.”

In a very short time, we’ve gone from living with AI to co-piloting AI in the workplace, so how ready is the Australian labour market for this paradigm shift?

According to a report by Microsoft, the creators of Bing Chat, 49% of people say they’re worried AI will replace their jobs, but even more – 70% – would delegate as much as possible to AI to lessen their workloads.

The Work Trend Index survey (2023) surveyed 31,000 people in 31 countries (including Australia) and analysed trillions of Microsoft 365 productivity signals, along with labour trends from the LinkedIn Economic Graph.

Mr Klupfel says he thinks a lot of people will struggle with the concept of AI and be afraid of it, but there will also be those who embrace it as a new tool, which will help them stand out and do better work.

But does Mr Klupfel see a future where AI will displace human employees?

“It will certainly happen. Initially, it will replace lower-value work, that is, anything you would outsource overseas. We already use AI and machine learning tools that replace hours of number crunching, and this trend will continue.

“At the end of the day though, I think you will always need human oversight and direction,” he says.

It may (or may not) surprise you to know that this story was in part AI generated – can you tell?

The lowdown on AI

Q: What is AI?
AI stands for Artificial Intelligence. It refers to the simulation of human intelligence in machines that are programmed to think and act like humans. It works by using algorithms and statistical models to analyse and draw inferences from data to make predictions or decisions.

If you unlock your phone with face ID or find your social media news feeds dishing up content based on what you like, then you are living with and interacting with AI.

Q: Can AI outsmart humans?
While AI can assist with decision-making when the data, parameters and variables involved are beyond human comprehension, it fails to capture and respond to quintessentially human factors that go into real-life decision-making such as ethics, morals and emotion. In other words, while AI mimics human intelligence it does not think and feel emotions like a human being. Yet…

Q: Is AI dangerous?
Like any technology, AI can be used for both good and bad purposes. It’s important to ensure that AI is developed and used responsibly.

Q: What is the future of AI?
The future of AI is exciting and full of possibilities. As technology continues to advance, we can expect to see more sophisticated and powerful AI systems that can help us solve complex problems and improve our lives.
