There’s been a lot of interest in AI in recent months, driven by the excitement over ChatGPT and the various image and content creation algorithms.
It’s interesting technology with great potential to disrupt, but once we’ve all enjoyed playing with it, will it become part of the tech backbone alongside Blockchains and the Cloud?
And if so, at what point do we revisit the ethical and moral impact of this? Just as the “fake news” problem led to a need to certify content for accuracy, will we need to verify human-created vs AI-generated content?
For instance, what happens if ChatGPT is used to create legal documents to avoid an expensive solicitor? Will they stand up in court? Will content creation businesses all collapse as AI takes over churning out what we see on our feeds every day, or will they evolve to meet the new challenges?
Who’s Creating What?
We’re already hearing of the problems universities are having in grading coursework due to the difficulty of distinguishing ChatGPT-generated content from student-written content. This has led to the idea of degrees being graded from exams only, which is a blow for those who aren’t great at exams.
Only time will tell whether ChatGPT will be banned or regulated, or whether this will all blow over once the hype has faded.
Here for the journey!
It’s going to be an interesting journey to watch, and it will open up some very deep conversations, but that’s what progress is all about and I’m looking forward to it.
And before anyone asks, this post was not written using ChatGPT!