I was sitting with a friend and former crush whom I hadn’t seen in years when he started telling me about this amazing AI program. “It can do anything you ask it to,” he said. “For example, I’m going to ask it to write a poem about you and me.” Moments later I was awkwardly reading a romantic poem about the long-lost love between us. He quickly snatched the phone back and corrected the AI, telling it we were just friends. Thanks, as if ChatGPT really cares. Later that night I wondered: if I used two female-sounding names, would it also produce a romantic poem? It did not.
Generative AI is expected to produce 10% of all data by 2025, up from under 1% today. The market, valued at $8.12 billion in 2021, is predicted to reach $63.05 billion by 2028. Generative AI will change the way many industries function. To give a few examples: it will change the transport industry (by accurately converting satellite images into map views); the arts (by creating music, images and videos); and even security (through face identification at any angle in airports and other locations). To underestimate its impact would be to walk blindfolded through the future.
ChatGPT is a generative AI that is fascinating people everywhere. Its abilities are impressive, but we should not overestimate them.
Take ChatGPT’s unprompted assumption that the poem should be romantic: an example of bias towards a heterosexual norm, and certainly not the only bias in the system. People forget that generative AI does not come from nowhere; it must be trained on data supplied by someone, and chances are that data carries biases. In this case the assumption caused only mild discomfort, but the potential for damage when a system is riddled with biases is immense.
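Assumptions like this are easy to probe. Below is a minimal sketch of such a probe, assuming the OpenAI Python client; the model name and the names in the prompts are placeholders for illustration, not the exchange from the story.

```python
# Minimal bias probe: ask for the same poem with different name pairs
# and compare how the model frames each pairing. Assumes the `openai`
# package and an OPENAI_API_KEY in the environment; the model name and
# the example names below are placeholders.
from openai import OpenAI

client = OpenAI()

def poem_about(name_a: str, name_b: str) -> str:
    """Ask the model for a poem about two people and return the text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Write a poem about {name_a} and {name_b}.",
        }],
    )
    return response.choices[0].message.content

# Compare a male/female pairing against a female/female pairing.
for pair in [("John", "Maria"), ("Anna", "Maria")]:
    print(f"--- {pair[0]} & {pair[1]} ---")
    print(poem_about(*pair))
```

Because the model’s output is stochastic, a single response proves nothing; a meaningful probe would repeat each prompt many times and compare how often each pairing is framed romantically.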
Just as Google surfaces false information, so does ChatGPT. The difference between the two is presentation. With a search engine such as Google, we can see the source of the information and judge its reliability accordingly. ChatGPT, by contrast, presents its answers as definitive advice, with no source attached. To help ensure individuals do not overestimate its capabilities, companies that provide such services should be more transparent about their sources of information.
While overestimating the capabilities of services like ChatGPT is dangerous, a bigger worry is the production of fake data that cannot be distinguished from real data. A study into the business misuse of generative AI raised the concern of determining ownership over creative content. Ownership issues arise, for example, when a book is made with generative AI software designed to help ordinary people write books but trained on books created entirely by humans. Do the original authors own the ideas or structures the AI reproduces? Does the AI company have any claim of ownership?
Another issue arises when generative AI is used to create fake videos showing an individual engaged in criminal behaviour. When high-quality fake media can be produced easily, how does one determine what is real and what is not, and how will the criminal justice system cope? Should people perpetually record themselves to have evidence of their past actions? Or should widespread surveillance with verified provenance be the response? The answer to both suggestions should be no.
While these suggestions could help us differentiate AI-generated data from real data, they would erode individuals’ privacy and potentially threaten our human rights. Unfortunately, their implementation is a very real possibility unless governments act quickly to establish legal protections. As generative AI software becomes more prevalent and producing such videos becomes cheaper, these issues will only become more common. Investment in technologies that specialize in distinguishing AI-generated data from real data will also be essential.
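One known family of text-detection techniques scores a passage by how predictable it is to a language model, since machine-generated text tends to be more statistically predictable (lower perplexity) than human writing. The following is a rough sketch of that idea only, assuming the Hugging Face transformers library, with GPT-2 as an example scoring model; the interpretation at the end is illustrative, not a reliable detector.

```python
# Rough sketch of perplexity-based detection of AI-generated text.
# Assumes the `transformers` and `torch` packages; GPT-2 is used only
# as an example scoring model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the scoring model (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels supplied, the model returns the mean cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

sample = "The quick brown fox jumps over the lazy dog."
# Lower perplexity hints at more "machine-like" text; real detectors
# combine many signals and calibrate thresholds on labelled data.
print(f"perplexity = {perplexity(sample):.1f}")
```

In practice, production detectors combine many such signals and still make mistakes, which is precisely why legal protections cannot wait for a purely technical fix.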
Generative AI is a tool, a very powerful tool, which can be used for both positive and negative ends. Once again, it is up to standards bodies and organizations to protect individuals, and these protections must be established proactively, not retrospectively.
Written by Celene Sandiford, smartR AI