PlagiarismCheck research reveals that Google’s AI chatbot Bard can generate plagiarism

London, UK, May 31, 2023, ZEXPRWIRE, AI generators produce output in response to a prompt by drawing on words from billions of sources. When PlagiarismCheck.org, a Ukrainian startup specializing in EdTech software, began testing its AI Detector on texts generated by Bard, it discovered a significant concern: the PlagiarismCheck tool found that 5% to 45% of the AI-generated content could be considered plagiarism.

The experts analyzed over 35 texts, and plagiarism above 5% was found in 25 of them. The investigation by PlagiarismCheck, conducted on May 12–15, shows that Bard’s output is often highly similar to already existing texts.

“We tested Google’s Bard AI model and found that it generated up to 45% plagiarism simply by paraphrasing existing content. AI models should generate unique text and should not allow plagiarism,” says Language Analyst Natalie Voropai.

PlagiarismCheck experts add that the percentage of plagiarism may also depend on the complexity of the request. Sometimes the AI simply compiles widely available information on a topic; when data is scarce, it generates a more abstract text with a lower probability of plagiarism. The team plans to test this hypothesis in more detail.

The full version of the research is available here: https://www.youtube.com/watch?v=l7K4n5FoxfA

Garrett Baklytsky, Head of Product at PlagiarismCheck.org, says:

“At PlagiarismCheck, we have helped keep content original for over eight years by checking text for similarity and authorship for hundreds of institutions and organizations. In the last few months, we’ve also been checking for the presence of AI-generated text. We were very surprised to discover that not only can our AI Detector spot text generated by Bard, but our Plagiarism Checker also finds similarities between texts generated by this Google AI chatbot and existing content.

Usually, large language models take separate words from various sources and combine them into meaningful sentences. These models do not copy sentences from sources: they generate their own sentences based on their understanding of the context and the texts they were trained on. So it was very unexpected to see that Bard sometimes generates sentences that are very close to being paraphrased versions of original sentences stored in a source.

We noticed that these matches of paraphrased, or sometimes even identical, content can reach up to 45% similarity across a whole text generated by Bard. Of course, any similarity tool provides a list of matches with various sources, and these must be checked carefully. Some sentences should not be considered plagiarism because they may read very differently from the way they appear in the original sources. But we have seen cases where the similarity is obvious.”

Experts say this can cause a lot of trouble for users: not only because using AI-generated text is prohibited for students, for example, but also because of possible accusations of plagiarism. Many educational institutions do not accept papers containing 10% or even 5% similarity, let alone papers generated by AI. Even if an institution has no AI Detector to check whether a paper was AI-generated, a student submitting a paper generated by Bard can still face accusations of plagiarism.

It is now easier than ever to detect and remove plagiarized passages and AI-generated text that can damage a reputation or brand. The new tools address a serious threat and allow AI assistants to be used safely.

[Image: report example]

The PlagiarismCheck tool determined the overall percentage of similarity, flagging both identical matches and altered text. The solution also provided detailed reports with clickable links to the sources from which the text was plagiarized or paraphrased.
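PlagiarismCheck’s actual matching algorithm is proprietary and not described in this release. As a rough illustration only, a similarity percentage of the kind shown in such a report can be approximated with a simple word n-gram overlap measure; the function names and the sample sentences below are hypothetical, not taken from the tool:

```python
def ngrams(text, n=3):
    # Lowercase word trigrams: a crude stand-in for real text fingerprinting.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_percent(candidate, source, n=3):
    # Share of the candidate's n-grams that also appear in the source.
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    return 100.0 * len(cand & ngrams(source, n)) / len(cand)

source = "large language models combine words from many sources into new sentences"
candidate = "language models combine words from many sources into entirely new text"
print(round(similarity_percent(candidate, source), 1))  # a partial-overlap score
```

A paraphrased sentence shares many short word sequences with its source, so it scores well above zero without matching exactly, which mirrors how the release describes flagged “altered text”.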

Bard has been accused of plagiarism before. Previously, accusations were limited to the lack of accurate references to sources and to misattributed research authorship. Online publishers are concerned that AI will continue to use their content without proper attribution, which could reduce traffic to their sites and their ad revenue. Online platform owners are also unhappy that their content was used to train chatbots without any compensation.

This challenge can be overcome thanks to the development of AI detection software. Plagiarism threatens not only academic integrity but also the reliability and uniqueness of content in general. AI detection tools help protect businesses and minimize risks by ensuring high-quality, verified content.

About PlagiarismCheck.org

PlagiarismCheck.org is an online tool that helps educational institutions and individuals check text originality, verify authorship, and improve writing. It scans web pages and academic databases to identify instances of plagiarism. With 200,000+ users and 8 years on the market, PlagiarismCheck.org helps its users ensure the integrity of their work and maintain high standards of academic and professional ethics.

Learn more at https://plagiarismcheck.org/ 

Disclaimer: The views, suggestions, and opinions expressed here are the sole responsibility of the experts. No Economics Bot journalist was involved in the writing and production of this article.

Vehement Finance News Network