AI Prompt Engineering - the next must-have IT skill?

A look at prompt engineering as an emerging key skill for IT professionals - the key concepts involved, learning resources, and its impact on the IT workforce overall

2023’s Generative AI boom introduced plenty of new lingo into our vocabulary, and ‘Prompt Engineering’ is one term that intrigues me. It refers to a new skill/discipline, one vital to success with GenAI tools, yet non-existent just a few years ago.

In 2024, companies everywhere are racing to train employees in this emerging area, to help them maximise the benefits that GenAI can offer.

So what’s a prompt, and why must we engineer one?

In the context of AI, a prompt is the text that instructs an AI model to generate a response. In the current generation of GenAI tools, this text prompt has become the ‘user interface’. Prompt engineering (or prompt design) is the skill of designing prompt text that produces the most relevant responses from an AI model.

Prompt engineering relates heavily to current Large Language Models (those that power Copilot, ChatGPT and others) and their ability to perform in-context learning. This learning is temporary, lasting only within a chat session, but carefully designed prompt statements can use it to shape a model’s outputs.

  • Poorly crafted prompts can lead to underwhelming answers, or many iterations before a useful answer appears
  • A good prompt engineer can shortcut this process and get better outputs from the same model/tool (the sketch after this list gives a flavour of the difference)
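
To make that concrete, here is a minimal sketch of the difference, written against the OpenAI Python SDK (any chat-style GenAI API follows the same pattern). The model name, prompts and example feedback are placeholders rather than recommendations:

from openai import OpenAI

# Assumes the OpenAI Python SDK is installed and the OPENAI_API_KEY
# environment variable is set; the model name below is a placeholder.
client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt to a chat model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague prompt: the model has to guess what kind of answer we want.
vague = "Summarise this feedback: 'The app is fine but crashes on login sometimes.'"

# A designed prompt: in-context examples show the exact output we expect.
designed = (
    "Classify customer feedback as POSITIVE, NEGATIVE or MIXED, "
    "then give a one-line summary.\n\n"
    "Feedback: 'Love the new dashboard, so much faster.'\n"
    "Answer: POSITIVE - praises dashboard speed.\n\n"
    "Feedback: 'Checkout keeps timing out, very frustrating.'\n"
    "Answer: NEGATIVE - checkout reliability problem.\n\n"
    "Feedback: 'The app is fine but crashes on login sometimes.'\n"
    "Answer:"
)

print(ask(vague))
print(ask(designed))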

Who should learn prompt engineering?

According to some, the current answer is ‘everyone’. GenAI is showing signs of being as pervasive as smartphones or web searches, so using it effectively is no longer a purely technical skill. While experts expect Prompt Engineering to be a transitional skill (one that is eventually replaced, or becomes as commonplace as using a web search engine), we could see a surge in demand for it over the next few years.

Here at Avanade, our L&D team created a custom Prompt Engineering course that will become mandatory for all employees (IT, Sales, HR, etc.). It’s an hour of hands-on training that teaches the basic concepts and puts them into practice. We designed it to fit our needs, but if you’re looking for something pre-packaged, here are some of the better courses from 2023:

Getting Started on Prompt Engineering with Generative AI - Pluralsight

Prompt Engineering for ChatGPT - Coursera

Key concepts to learn

Adjusting writing styles and personas

Many people don’t realise that prompts can influence the style of the responses from GenAI tools. If you need to see answers from a different perspective, try adding ‘Act as a…’ to the start of your prompt and watch the outputs change. Personas could range from a university lecturer to a six-year-old child, a retail customer, or anything else you can imagine!
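
As a rough illustration, the persona is just text prepended to the question. The snippet below only builds and prints the prompts, which you can paste into any chat tool or send through an API client like the earlier sketch; the personas and question are arbitrary examples:

# The persona is simply prepended to the prompt; these strings can be
# pasted into any chat tool or sent through an API client.
question = "Explain why software testing matters."

personas = [
    "Act as a university lecturer teaching first-year computer science.",
    "Act as a six-year-old child.",
    "Act as a retail customer who just found a bug in a shopping app.",
]

for persona in personas:
    print(f"{persona}\n\n{question}")
    print("=" * 60)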

Output formats and levels of detail

In a similar way, the right prompt can control several other elements of the output format. That includes asking for results as a table, as bullet points, or even in a specific syntax such as JSON or CSV. You can even ask it to include specific columns or details that were missed the first time. Remember: the more specific your request, the better the AI model will be able to serve it.
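
Here is a small sketch of steering the format through the prompt alone, again assuming the OpenAI Python SDK; the model name, the chosen columns and the JSON keys are placeholders you would adapt to your own data:

from openai import OpenAI

# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment
# variable; the model name, columns and JSON keys are illustrative.
client = OpenAI()

prompt = (
    "List three popular Python web frameworks as a table with the "
    "columns: Framework, First released, Typical use case. "
    "Then repeat the same data as a JSON array of objects with the "
    "keys 'framework', 'first_released' and 'use_case'."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)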

Using session context and follow-up questions to refine results

Remember that current AI models retain context from previous messages in your chat session, so use this to your advantage. If initial responses aren’t hitting the mark, ask the tool to refine them based on follow-up information you provide. You won’t need to provide the original input each time, as this is automatically included in follow-up prompts. Not sure what extra details to provide? Try asking the model itself what information it requires to fulfil a particular task!
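
Chat tools such as Copilot and ChatGPT handle this context for you; if you are calling a model through an API, the same effect comes from resending the conversation history, as in this rough sketch (OpenAI Python SDK assumed; the model name and prompts are placeholders):

from openai import OpenAI

# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment
# variable; the model name and prompts are placeholders.
client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

# First turn: the initial request.
messages = [{"role": "user",
             "content": "Draft a short email inviting the team to a retrospective."}]
first = client.chat.completions.create(model=MODEL, messages=messages)
draft = first.choices[0].message.content
print(draft)

# Follow-up turn: refine the draft without repeating the original request.
# The earlier turns stay in the messages list, so the context carries over,
# just as a chat UI would do for you automatically.
messages.append({"role": "assistant", "content": draft})
messages.append({"role": "user",
                 "content": "Make it more informal and suggest next Friday as the date."})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)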

Understanding hallucinations

If you’ve tried ChatGPT or Copilot then you’ve likely experienced what’s called a hallucination. It refers to output an AI model produces unintentionally that is inaccurate or misleading. These outputs can look very convincing, but they are an unfortunate consequence of how the models are designed. Most AI models today combine multiple data sources and, at times, lack the ability to reason about or correctly interpret that data. The most reliable way to limit hallucinations is to ground a model in data sources specific to your scenario, a technique known as Retrieval Augmented Generation (RAG). However, this requires development effort to implement, putting it out of reach of non-technical users for now. The alternative is to stay alert and use your own judgement to fact-check or verify any data points produced by an AI tool.
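
For readers curious what RAG looks like in practice, here is a deliberately toy sketch: a naive keyword lookup stands in for a real vector search, and the retrieved snippets are pasted into the prompt so the model answers only from them. The OpenAI Python SDK is assumed; the model name, policy snippets and question are made up for illustration.

from openai import OpenAI

# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment
# variable; the model name, policy snippets and question are made up.
client = OpenAI()

documents = [
    "Expense claims must be submitted within 30 days of the purchase date.",
    "Remote workers may claim up to 50 EUR per month for home-office internet.",
    "All travel bookings require prior approval from a line manager.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Naive keyword-overlap lookup, standing in for a real vector search."""
    words = set(question.lower().split())
    ranked = sorted(docs, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:top_k]

question = "How long do I have to submit an expense claim?"
context = "\n".join(retrieve(question, documents))

# Ground the model in the retrieved snippets and tell it not to guess.
prompt = (
    "Answer the question using only the context below. If the context "
    "does not contain the answer, say you don't know.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)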

Impact on diversity in tech

What’s interesting about this area compared to traditional tech fields is that it requires a strong command of written language and other soft skills, combined with analytical thinking. These soft skills are not always strengths of software developers or data engineers/scientists, so prompt engineering could open the door to a more diverse workforce with fresh perspectives on building software.

Some of the initial theory may be daunting to those without IT backgrounds, but those who persist may unlock high-demand career paths. So what are you waiting for – will this be on your ‘learning list’ for 2024?