Nopaque, through consultancy, delivery and product, is helping organisations meet their goal of solving their customer challenges at the First Point of Contact (FPoC).
It's impossible to talk about Prompt Engineering without mentioning the new buzz phrase "Generative AI". No matter: it's here to stay, and its potential isn't fully understood yet. You wouldn't expect constraining it to be the first topic we mention, yet that's essentially what's needed to hone the benefits you can gain from using it. Many of the available models, such as Bard, GPT-3/4 and Llama 2, are built from immense collections of data, and that data can contain the good and the bad. Unconstrained use of such a service in your customer interactions could have a damaging and lasting impact on your customers.
Whether you're just starting out or an advanced consumer looking to build your own, you still need to consider the intended outcome of using this technology in the context of positive results and experiences for your customers and your employees.
Nopaque have experience with LLMs, having integrated them into the products we build, the projects we engage with, and the creation of content relevant to the clients we serve. The value within LLMs is accessed through prompts, and this is where we need to look at how to constrain them. Have a read of our blog on prompting to learn more through a real-world example of how to create prompts. While it's a simple example, we think it gives the reader a good basic understanding of constraining what a model responds with, and how that can be used in utilities like discerning why a customer is contacting you.
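To make the idea of constraining a model concrete, here is a minimal sketch of an intent-triage prompt. The intent labels, the `build_prompt` helper and the fallback validation are illustrative assumptions for this post, not the exact prompt from the blog referenced above, and the call to the model itself is left out:

```python
# Hypothetical example: constraining an LLM to a fixed set of contact-centre
# intents, with a validation step in case the model strays anyway.
ALLOWED_INTENTS = ["billing", "technical-support", "cancellation", "other"]

def build_prompt(customer_message: str) -> str:
    """Build a prompt that restricts the model to one of the listed intents."""
    intents = ", ".join(ALLOWED_INTENTS)
    return (
        "You are a contact-centre triage assistant.\n"
        f"Classify the customer's message into exactly one of: {intents}.\n"
        "Respond with the intent label only, nothing else.\n\n"
        f"Customer message: {customer_message}"
    )

def validate_response(raw: str) -> str:
    """Guard against unconstrained output: fall back to 'other' if needed."""
    label = raw.strip().lower()
    return label if label in ALLOWED_INTENTS else "other"
```

The key point is the pairing: the prompt narrows what the model should say, and the validation step ensures that whatever comes back still lands inside the set your downstream systems expect.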
There are myriad other examples of its use. If you're considering plugging this into your customer journeys, feel free to get in touch and talk about what you're trying to achieve.