
The Complex Conundrum of Using ChatGPT at Work

AI has presented many challenges to the global workforce, from causing layoffs to pressuring companies to reskill their employees, but one of the oft-ignored concerns is the use of ChatGPT at work. Glassdoor has been tracking ChatGPT adoption rates ever since the tool launched, and its numbers indicate that usage has risen from 27 percent among early adopters to 62 percent of professionals as of November. The use of AI tools at work can be harmless when it is company-sanctioned and runs on up-to-date, secure software that protects company data. The unrestricted use of ChatGPT at work, however, can bring about a host of ethical and safety concerns, along with unexpected legal risks for the company.

The numbers from the study indicate that marketing has the highest ChatGPT adoption rate at 77 percent, followed by advertising at 76 percent and consulting at 71 percent. The use of ChatGPT at work was slightly higher among men (66 percent vs. 57 percent), and Gen Z were the most frequent users at 66 percent, only a little ahead of millennials at 63 percent and Gen X at 57 percent. Regardless of industry, ChatGPT is finding room for itself in many a career.

Are We Okay With Using ChatGPT at Work?

If the ChatGPT adoption rates are anything to go by, it’s quite clear that AI tools do simplify work and are popular for a reason. Beyond the initial struggle of familiarizing yourself with an AI tool lies a wide range of possibilities that can bring you new ideas and change the nature of your work entirely. The slow but certain rise of prompt engineering as a profession shows that becoming an expert on AI is increasingly important. While many of these engineers work to refine the AI tools themselves, searching for errors and flaws in the system, they are equally capable of drawing out the best possible results from generative AI at work. With the rise of such niche yet versatile professions, you might start to wonder about the implications of tinkering with ChatGPT at work yourself.


Generative AI at Work from a Company Perspective

It’s hard to put a finger on the full extent of the benefits companies are seeing from AI tools globally, because AI tools are not as clearly defined as one might think. Generative AI that can answer your questions is one thing, but there is also management software that gathers, summarizes, and analyzes data to simplify your job before you even start putting that data to good use. HR jobs become quite a bit more streamlined through such AI tools, freeing HR professionals from repetitive tasks to focus on improving work culture and productivity within the workplace.

Speaking of productivity, AI tools are also quite efficient at helping a single employee accomplish more within a given amount of time than they could without them. Some use ChatGPT at work to summarize documents and generate briefs more easily on their own. Other AI productivity tools go deeper, helping with replying to emails, suggesting action points, scheduling events, and automating tasks, overall freeing up the employee to do more useful things with their time.

Some generative tools are being applied within organizations, where they access native organizational data to provide insights, power chatbots that interact with customers, regulate data access for employees, and act like a live assistant who knows the ins and outs of the company. Are there many benefits to AI tools? Most certainly. And yet they shouldn’t be applied freely just anywhere.

Unregulated Use of ChatGPT at Work

The company-sanctioned use of ChatGPT at work can definitely boost a company, and these benefits even stack up over time as employees grow more familiar with the tool. The problem comes with overusing ChatGPT and other free tools to get large chunks of work done without verifying anything about the generated content. The New York Times recently sued Microsoft and OpenAI for reproducing its paid content for free through their AI chatbots and for often presenting material from its researched articles without citing them as the source. Imagine using content from ChatGPT assuming it’s original because the AI “generated” it, only to get sued because your work plagiarized someone else’s by mistake.

A lot of the content these chatbots provide is presented as-is from the internet sources the chatbot was trained on, making it risky for your company to use. Even business ideas, creative ad concepts, and the like might be generated from existing information that you could get in trouble for copying. Companies that are unaware this is where their employees are sourcing their material have no way to prepare for the consequences either.

It is also often observed that AI-generated content lacks depth and amounts to what you’d find by opening the first few links on any topic you wanted content on. Without true creativity and the human touch, ChatGPT at work can affect the quality of the work done within the organization. Worse, if an employee is caught using it to do their work for them, such as writing ad copy or generating ideas, it could seriously damage their relationship with the company. And if a company is caught using generative AI at work without disclosing that its content is AI-generated, it could permanently damage its reputation with customers. Obviously, this is a much bigger issue for news platforms or literary agencies, but it could affect anyone in any industry, making customers doubt your organization’s experience and actual knowledge of the task at hand.

Safety Concerns With Using AI at Work

The safety issues of using AI tools at work are an entirely separate discussion on their own and can pose serious risks to an organization. If an employee feeds the transcript of a business meeting into a public chatbot to summarize the minutes of the meeting, they are essentially giving that data to the AI to learn. All of that information becomes more vulnerable to individuals who exploit the security gaps in the tool. Having sensitive data leaked is never a good time. In May 2023, Samsung banned the use of ChatGPT at work along with other AI tools on any company-issued device after they found that employees had uploaded sensitive code to ChatGPT.

This doesn’t mean that all companies need to impose a blanket ban on AI tools at work, but it does mean that the adoption of something as unfamiliar as AI has to be done with care. If an AI management system or any similar tool is adopted by an organization, care must be taken to ensure that sufficient safety checks are in place and that all employees are trained to use the AI tools correctly. Without this, generative AI can damage a company more than it benefits it.

Ava Martinez