Lots of news has been circulating about the impact of artificial intelligence (AI) chatbots on today’s society. Chatbots can provide easy companionship to anyone who wants it, they can help in several fields of work such as customer service and, notably, they also offer students a near-perfect way to cheat on their work.
Whilst the information AI chatbots can give us is useful, conversational, and can be relatively harmless, their ability to replicate a human and give precise, high-quality answers spells trouble for the academic field.
Success in academia ought to be based on one’s own merit. As AI chatbots offer a way for a person to produce work entirely through a third-party system, how will we stop these bots from being used to cheat and undermine the all-important academic system?
How Are Bots Being Used To Cheat In Essays?
AI robots could become a useful asset in the future of education, but bots also provide a dangerously simple way that students can cheat in essays and other pieces of written work.
Whilst students using the internet to cheat on their work is no new revelation (it has been happening for years), AI bots provide a totally new and far less detectable way of cheating.
As opposed to just using the internet to google the right answers or download a ready-made essay from the web, AI technology can give a student a completely original, high-quality piece of work.
This technology is what is known as a large language model. Give it a prompt, such as an essay question, hit return, and it will produce a unique piece of text ready to hand in at your teacher’s desk.
AI bot apps are often easy to use and cost nothing (though the better the app, the higher the fee may be). So, students can get the pieces of work they need in just a few simple clicks and their teachers may be none the wiser.
“Protect The Education System”
Of course, whilst large language models are an impressive feat of engineering, the dangers they pose to education are obvious. Luckily, some people are already putting their minds towards how we can reduce the threat chatbots pose to academia.
Ed Daniels is a 22-year-old student at the University of Bristol. After being required by his university course to create a project that integrates AI with education, Daniels decided to develop the software start-up AIED.UK with the help of a grant from the university’s start-up incubator, Runway.
Daniels has created AIED.UK with the hopes that it could help to “protect the education system”. The app can detect if an essay has been generated with the help of AI. This can, in turn, help to ensure there is no inequality in academic settings and help to “level the playing field”, as Daniels put it.
Daniels believes that using AIED.UK to counter AI will be like “fighting fire with fire”. The software detects a bot by spotting writing that seems too predictable to have come from a real human.
“Normal human writing and speaking don’t always use the most predictable word, so the technology in the app effectively notices that if it can predict which word is coming next, a bot has probably written it.”
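Neither detector’s internals are public, but the “predictability” idea Daniels describes can be illustrated with a toy model. The sketch below (the function name and the unigram approach are our own illustration, not Daniels’ code; real detectors use neural language models) scores a piece of text against a reference corpus: the more predictable each word is, the lower the score.

```python
import math
import re

def pseudo_perplexity(text, corpus):
    """Score how 'predictable' text is under a toy unigram model
    built from a reference corpus. Lower scores mean the words are
    more predictable, which (per the detection idea) hints at a bot."""
    words = re.findall(r"[a-z']+", corpus.lower())
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    total = len(words)
    vocab = len(counts) + 1  # +1 bucket for unseen words
    tokens = re.findall(r"[a-z']+", text.lower())
    log_prob = 0.0
    for t in tokens:
        # Laplace smoothing so unseen words don't zero out the probability
        p = (counts.get(t, 0) + 1) / (total + vocab)
        log_prob += math.log(p)
    return math.exp(-log_prob / max(len(tokens), 1))
```

Text built from common, expected words scores low; unusual word choices push the score up, which is why (as Daniels notes) the unpredictability of normal human writing is a useful signal.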
At Princeton University, another 22-year-old student, Edward Tian, has had a similar idea and built an app to detect text written by ChatGPT.
This app is called GPTZero and Tian has assured its users that it can “quickly and efficiently” decipher whether ChatGPT or a human has authored a piece of work.
GPTZero uses two indicators to determine whether a piece of work has been written by a bot: “perplexity” and “burstiness.” If a bot finds a piece of text perplexing, the text has high perplexity and is more likely to be human-written. If the text looks familiar to the bot, that is because it resembles the data the bot was trained on, and it is therefore more likely to be AI-generated.
“Burstiness” compares the variations of sentences. Humans tend to write with greater burstiness, meaning they may write long and complex sentences alongside shorter ones, but AI sentences tend to be more uniform.
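Tian has not published GPTZero’s burstiness formula, but the idea of comparing sentence-length variation can be sketched in a few lines (the function name and the use of standard deviation are our own assumptions for illustration):

```python
import re
import statistics

def burstiness(text):
    """Standard deviation of sentence lengths, in words.
    Human writing tends to mix long and short sentences (high value);
    uniformly sized sentences, typical of AI output, score near zero."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)
```

A passage of identically sized sentences scores 0.0, while prose that alternates long, complex sentences with short ones scores much higher.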
Whilst AIED.UK is still relatively new, GPTZero has been around for a couple of months and has already received great support and popularity.
In just the first week of its launch, more than 30,000 people had tried out the app. In fact, it became so popular that the app crashed and Tian was forced to modify it so that it could handle the greater web traffic.
Whilst neither AIED.UK nor GPTZero can be said to be foolproof (both rely on assumptions about how humans and chatbots write), they are a solid starting point and provide software that can be developed to become ever more accurate at spotting the bots.
Solutions To The Bot Problem
The start-up grant Daniels received to fund AIED.UK and the immense support Tian’s GPTZero has attracted show the large market that bot-spotting apps already have. It seems as though apps similar to GPTZero and AIED.UK are already being viewed as the way forward in how society will protect our education system from the interference of bots.
Apart from the education system, there are also steps being taken elsewhere to stop bots from infecting important sources of information.
For example, there are innovations being made in how to stop AI bots from interfering in social media.
Recently, Elon Musk has been making headlines for how he plans to stop bots from taking over Twitter. He has described these efforts as “the only realistic way to address advanced AI bot swarms taking over” in an otherwise “hopeless losing battle.”
Musk has introduced AI-detecting technology on the platform so that information produced by bots can be spotted and given a warning sign. Furthermore, he has altered Twitter so that the only recommended posts you will see and the only votes that will appear in polls will be from verified subscribers. He hopes that by making users pay to be verified, people using bots will be deterred from becoming verified subscribers themselves.
Whilst not everyone is convinced by Musk’s methods of stopping the bots, his efforts are at least a step in the right direction and show how we should be thinking about keeping social media safe in the future.
False news and information being spread on social media by AI could be incredibly damaging to society in the same way that writing essays using AI could be unfair in academia.
The solutions being thought of by people such as Ed Daniels, Edward Tian and Elon Musk are essential in keeping a level and fair playing ground in today’s society, and that of the future.
The damage AI bots could do if not kept in check could have a detrimental effect on several aspects of life. If AI cannot be kept in check in the academic sector, must we imagine a future where all students (heaven forbid) have to return to pen and paper?
The post Spot The Bot! Bristol Student Creates App To Stop Student Cheats appeared first on TechRound.