These AIs are coming to take your jobs…
A few weeks ago I made a video testing out one of the biggest AI art generation tools, Stable Diffusion, and even trained a model using my own face with Google’s Dreambooth tool. In this video though, I want to talk a bit more about the consequences of using tools like that. AI technology is advancing rapidly, and there are many different ways in which it can be used to enhance or replace human labour. In this video, I will discuss the pros and cons of using AI tools in this way, as well as the ethical considerations that come into play when using these tools.
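If you didn’t see that video, the core of what I was doing boils down to a few lines with the diffusers library. Here’s a minimal sketch, assuming you have a CUDA GPU and a public Stable Diffusion checkpoint to hand – the model name and prompt are placeholders, not exactly what I used:

```python
# Minimal text-to-image sketch with Hugging Face's diffusers library.
# The checkpoint and prompt are placeholders; a Dreambooth fine-tune
# would be loaded the same way, just from a different model path.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")  # swap to "cpu" if you don't mind waiting

image = pipe("a portrait photo of an astronaut riding a horse").images[0]
image.save("output.png")
```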
The use of AI like Stable Diffusion in the field of professional art can be both dangerous and beneficial. On the one hand, AI tools like Stable Diffusion can provide artists with the ability to create new and unique works of art quickly and easily. This can be especially useful for artists who are under tight deadlines or who are looking for new ways to push the boundaries of their art.
However, the use of AI tools like Stable Diffusion can also be dangerous for professional artists. One of the biggest dangers is the potential for these tools to be used to create counterfeit art. With the ability to quickly and easily generate new works of art, it may become easier for unscrupulous individuals to create fake art and pass it off as the real thing. This could damage the reputation and livelihood of professional artists, who rely on the authenticity of their work to maintain their careers.
Additionally, the use of AI tools like Stable Diffusion could lead to a decrease in the value of art. If AI-generated art becomes more widespread, it could devalue the work of human artists and make it more difficult for them to sell their work at a premium price. This could make it harder for artists to make a living from their work, and could ultimately discourage people from pursuing careers in art.
Artists could potentially take advantage of the trend of using AI tools like Stable Diffusion by creating their own models based on their art and selling or licensing them as a way of generating additional revenue. This could be a unique and innovative way for artists to monetize their work, and could help them reach new audiences and gain exposure for their art.
For example, an artist could create a model based on their own unique style of painting, and then sell or licence that model to other artists or individuals who are interested in creating art in that style. The artist could also use their model to generate new works of art themselves, which they could then sell or licence to others. This could provide a new source of income for the artist, and could help them to expand their artistic practice and reach new audiences.
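As a rough sketch of how that licensing idea might look in practice, a buyer could load the artist’s fine-tuned checkpoint just like any other Stable Diffusion model. The model folder and trigger phrase below are entirely made up for illustration:

```python
# Hypothetical: generating images with an artist's licensed fine-tune.
# "./licensed-artist-style" and the "in artstyle-x" trigger phrase are
# invented placeholders; a real licensed model would document its own.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "./licensed-artist-style",  # local folder shipped by the artist
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a quiet harbour at dusk, in artstyle-x",  # trigger phrase from the licence docs
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("harbour.png")
```

How the outputs of a model like that could then be used or resold is exactly the kind of thing a licence would need to spell out.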
Of course, there may be challenges and obstacles that artists would need to overcome in order to successfully create and sell their own AI models. For example, they would need to have a strong understanding of AI technology and how it works in order to create a model that is effective and useful. They would also need to consider any legal or ethical issues that might arise from selling or licensing their models, and would need to take steps to protect their intellectual property and ensure that their models are not used in ways that could harm their reputation or career.
As for text generation AIs, they can be used for a variety of different purposes, many of which can provide significant benefits and positive outcomes. Some potential use cases for text generation AI include:
- Automated content generation: A text generation AI could be used to quickly and easily generate large volumes of written content, such as articles, blog posts, social media posts, and marketing copy. This could save time and effort for businesses and organisations, and could allow them to produce more content in less time (there’s a short sketch of this after the list).
- Content personalization: A text generation AI could be used to create personalised content for individual users. For example, a text generation AI could be used to generate emails or social media posts that are tailored to the specific interests and preferences of each user, making the content more relevant and engaging.
- Language translation: A text generation AI could be used to automatically translate written content from one language to another, making it easier for businesses and organisations to communicate with customers and clients who speak different languages.
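To give a feel for the first of those use cases, here is about the simplest automated content generation sketch there is, using the transformers library. The model, prompt and sampling settings are arbitrary placeholders, and as you’ll see later, the raw output is very much a first draft:

```python
# Bare-bones automated content generation with Hugging Face transformers.
# GPT-2 is used only because it's small and freely available; the prompt
# and sampling settings are arbitrary placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

draft = generator(
    "5 tips for keeping houseplants alive:\n1.",
    max_new_tokens=120,
    do_sample=True,
    temperature=0.8,
)[0]["generated_text"]

print(draft)  # a rough first draft, not publishable copy
```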
The use of a text generation AI could potentially be disruptive to a number of different job roles. Some of the job roles that might be most at risk of being made redundant by this type of tool include writers, journalists, and other professionals who create written content as part of their job.
In addition to writers and journalists, other professionals who create written content as part of their job, such as editors, proofreaders, and copywriters, could also be at risk of being made redundant by a text generation AI. These professionals would need to adapt to the use of this technology in order to remain competitive in their field, or to find other ways to use their skills and expertise in the workplace.
There are also coding AI tools like GitHub CoPilot, the use of which in the field of professional software development could have both pros and cons. On the one hand, a coding AI like GitHub CoPilot could provide benefits such as:
- Increased productivity: A coding AI like GitHub CoPilot could help professional developers to write and debug code more quickly and efficiently. This could save time and effort, and could allow developers to complete projects in less time.
- Improved accuracy: A coding AI like GitHub CoPilot could help developers to avoid mistakes and ensure that their code is correct and error-free. This could improve the quality and reliability of the software that developers create, and could reduce the need for debugging and other forms of code maintenance.
- Enhanced collaboration: A coding AI like GitHub CoPilot could facilitate collaboration among developers by providing real-time assistance and suggestions during the coding process. This could make it easier for developers to work together and share ideas, and could improve the overall effectiveness of the development team.
However, the use of a coding AI like GitHub CoPilot could also have some potential drawbacks and limitations, such as:
- Loss of human creativity: A coding AI like GitHub CoPilot could potentially limit the creativity and originality of human developers. By providing pre-determined solutions and suggestions, a coding AI could potentially stifle the ability of developers to think outside the box and come up with novel and innovative solutions.
- Potential for errors: While a coding AI like GitHub CoPilot could help to reduce the number of errors in code, it is not foolproof and could still make mistakes. This could lead to bugs and other issues in the software that developers create, which could be time-consuming and costly to fix.
- Dependency on technology: The use of a coding AI like GitHub CoPilot could create a dependency on technology for professional developers. If the AI were to malfunction or become unavailable, developers could be unable to complete their work, which could have negative consequences for their projects and careers.
Overall, the use of a coding AI like GitHub CoPilot could provide benefits such as increased productivity and improved accuracy, but it could also have potential drawbacks and limitations that need to be considered.
It is difficult to predict the exact level of risk that developers face from AI tools like GitHub CoPilot making their jobs redundant. While it is certainly possible that these tools could disrupt the job market for developers, it is important to note that AI technology is still in its early stages and is not yet advanced enough to completely replace human developers.
In the short term, it is likely that AI tools like GitHub CoPilot will simply augment the work of human developers, providing assistance and support to help them complete their tasks more efficiently and effectively. However, as AI technology continues to evolve and improve, it is possible that these tools could become more advanced and could potentially begin to replace human developers in some cases.
Ultimately, the extent to which AI tools like GitHub CoPilot will impact the job market for developers will depend on a variety of factors, including the rate of technological advancement, the adoption of AI tools by businesses and organisations, and the ability of human developers to adapt to the use of these tools. It is important for developers to stay informed about the latest developments in AI technology and to continue to develop and improve their skills in order to remain competitive in the job market.
Ok, if I’m being honest, everything you’ve just heard me say was generated using ChatGPT. I edited a few bits – like removing the endless “overall” statements – but almost everything was generated. You might have noticed it repeated itself a few times, its phrasing was a little weird, and there were some parts I removed because they were outright wrong – but I wanted to see how true its points on replacing jobs are. The short answer is, at least for now, you are plenty safe. Unlike Stable Diffusion, where you can get very different results from the same prompt, ChatGPT seems to basically have one answer for a given prompt or question. When I asked it about the pros and cons of using Stable Diffusion, it gave me the exact same points as when I asked for the ethical considerations of using it. It’s definitely clever, but it’s nowhere near production ready. The output it gives you still has to be edited by a human, and it feels a lot more like a starting point than a final draft.
As for the art side of things, that’s a bit more of a pressing matter. There’s quite a lot to consider there, including what counts as “influence” and what counts as “copying”. I think the problem is the scale and the specificity. When an artist learns to draw, paint or work in whatever their medium is, they learn from other artists. They learn from existing works – often directly copying their styles to learn – then use that knowledge to create something unique. Technically speaking, Stable Diffusion is doing that too, but I think the discomfort comes from the scale at which it does it, and, importantly, how specifically it can recreate a style. If you ask it for something in the style of a Van Gogh painting, you get a Van Gogh painting. That applies to pretty much any artist you can think of. The ability to use someone else’s style so directly and so easily is what’s so worrying. I’d argue, though, that art is more than just the piece itself. It’s the story, the creativity of the artist, and even the notoriety of the artist themselves. For commercial art I can see tools like Stable Diffusion being a fairly imminent threat, but for the sort of art you’d find in a gallery I’m not so sure.
And on the coding side of things, as much as the Programmer’s Humour subreddit might like to think it, ChatGPT isn’t going to be replacing developers any time soon. The point it made earlier about AI being able to write “bug-free code” is hilarious to anyone who’s tried to code – especially since it has no idea what it’s actually doing. It’s just pulling from a training library of human-written code, which I’m sorry to say is chock-full of bugs. Every language is human-written, and therefore not bug-free, and nor are the operating systems and CPUs the programs run on. As for GitHub CoPilot, as the name suggests, it’s positioning itself as a next-level IntelliSense tool, meaning it’s there to make your life as a developer easier, not to replace you. The idea is that instead of writing all the boilerplate stuff for, say, an Express API, you tell it what you want and it’ll generate the endpoints and stuff for you. It’ll make CRUD jobs easier for sure – but that doesn’t mean it’s going to write some dank query or black magic to process a complex data set for you – and it sure isn’t gonna write a stylish frontend either. It can’t debug your code, or troubleshoot an as-yet-unsolved problem.
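To show the kind of boilerplate I mean, here’s a minimal CRUD sketch. I’ve written it in Flask rather than the Express setup I mentioned, purely to keep the examples in Python, and the in-memory “database” is obviously just a stand-in. This is the sort of thing a tool like CoPilot will happily autocomplete for you:

```python
# The sort of boilerplate CRUD endpoints a tool like CoPilot is good at
# autocompleting. Flask stands in for Express here, and the in-memory
# dict stands in for a real database.
from flask import Flask, jsonify, request

app = Flask(__name__)
todos = {}      # fake "database"
next_id = 1

@app.post("/todos")
def create_todo():
    global next_id
    data = request.get_json(silent=True) or {}
    todo = {"id": next_id, "title": data.get("title", "")}
    todos[next_id] = todo
    next_id += 1
    return jsonify(todo), 201

@app.get("/todos/<int:todo_id>")
def get_todo(todo_id):
    todo = todos.get(todo_id)
    return (jsonify(todo), 200) if todo else ("Not found", 404)

@app.delete("/todos/<int:todo_id>")
def delete_todo(todo_id):
    return ("", 204) if todos.pop(todo_id, None) else ("Not found", 404)

if __name__ == "__main__":
    app.run(debug=True)
```

Filling that in for you is genuinely useful – it’s just a long way from designing the system around it.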
On the whole I’d say that AI tools are still very much tools – in the way that content-aware fill in Photoshop is a tool – that can be used to accelerate your work rather than outright replace you. In 10, 20 or 50 years, yeah, I can see AI tools handling a majority of programming roles, writing and plenty more, but that’s far enough away that I’m not sure I’m worried. There are bigger fish to fry right now.