Lesson 18 of 123
M0 - AI Fundamentals

What is prompt engineering?

3 min read • Video coming soon

At this point many AI guides will go into excruciating depth about all the things ChatGPT can do for your business.

I could do this but it’s really quite dull and not very helpful.

The better question is what can’t ChatGPT do?

It’s better to think of ChatGPT as a hyper-intelligent new employee that you’ve just brought onto your team.

Let’s call them Chad.

Chad is an absolute whizz in many fields - you hired them because of the genius-level intellect they showed in their interview, whizzing through standardised questions with top scores across the board.

Their first day in the office you give Chad a big task: write up the company’s annual report for investors.

Because Chad is so darn smart you know they’ll have zero problems with this task.

A day passes and Chad comes back with the report. Wow, so fast! Think of all the amazing tasks Chad can do for you next.

You read the report and your face drops.

It’s generic tripe.

The formatting and structure are correct, and a lot of the language sounds right.

But the actual content has nothing to do with your company, your customers, the work you do or any of the values you promote.

Now comes a divergence between how we treat ChadGPT (the human) and ChatGPT (the AI).

With Chad the human you’d probably realise that a lot of the fault lies with you the boss. You gave Chad a task but with no context.

He didn’t know about the company, customers, normal work or values. He doesn’t have that knowledge so how in the hells could he incorporate it into the report?

Lacking that context he simply put together the best report he could. It looks and feels right but is missing the actual specific context that makes a good report.

We’d sit with Chad and make sure he has all that context.

We’d give him past annual reports as examples. We’d walk him through our values. We’d bring him up to date on our customers and the work we do for them all.

His next report would incorporate all of this vital context. Mixed with his high intelligence it turns out to be fantastic.

Contrast this to how we treat ChatGPT, an AI.

We ask for an annual report and then get annoyed when the results are mediocre.

We don’t give it the benefit of the doubt like we would a human - we assume that it’s the AI’s fault, not our own.

In truth it’s very likely operator error.

As with ChadGPT, if we give ChatGPT the correct context and information it’s going to do a much better job.

Doing so requires feeding in all of that information up front and having ChatGPT use it in its output.
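To make "feeding in all of that information up front" concrete, here is a minimal sketch of how you might assemble a context-rich prompt before handing it to ChatGPT. The function name, the company details and the context labels are all made up for illustration - the point is simply that background goes in first, task last.

```python
def build_prompt(task: str, context: dict[str, str]) -> str:
    """Prepend labelled context sections to the task, so the model
    reads the background before it reads the instruction."""
    # Each context entry becomes a "Label:\ntext" section.
    parts = [f"{label}:\n{text}" for label, text in context.items()]
    # The actual request goes at the end, after all the context.
    parts.append(f"Task:\n{task}")
    return "\n\n".join(parts)

# Hypothetical example: the kind of context you'd give Chad on day one.
prompt = build_prompt(
    "Write this year's annual report for investors.",
    {
        "Company": "Acme Ltd, a (fictional) maker of widgets.",
        "Values": "Honesty, craftsmanship, customer focus.",
        "Past report": "Excerpt from last year's report goes here.",
    },
)
print(prompt)
```

The contrast with the bare prompt "Write our annual report" is the whole lesson: same model, same question, but now it has the context Chad got in his onboarding chat.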

Once we realise this, what we can use ChatGPT for really opens up.

We stop relying on ChatGPT to get everything right first time. A human wouldn’t, and neither will an AI. Instead we learn the skill of giving ChatGPT what it needs to best answer our questions.

This is a learning process. For us. Not the AI.

This skill is called prompt engineering - I’ll discuss it in a little detail below but you don’t need to know the ins and outs.

Let’s cover some definitions though so we are all on the same page!

Basically, prompt engineering is the skill of talking to and interacting with AIs.