Microsoft’s UK Tech Chief: 4 Areas Where AI Quickly Creates Value, AI Summit London
The “meta prompt” framework for guardrails around the output of generative AI models was also unveiled
Companies are looking to deploy AI in their organizations but need help in identifying the most pragmatic areas to start applying the technology.
“When I’m talking to customers, they’re going, ‘Glen, it all sounds great. Where should I start?’” said Glen Robinson, Microsoft UK’s national technology officer, at the AI Summit London. “There are some really obvious places of value where I'd encourage you to start today. The opportunity really is yours.”
Robinson recommends the following four areas that will let companies realize value “really quickly when these models perform well.”
Content generation: Used in many call centers where the generative AI model is drafting a response to the customer to follow up after a call. The customer service agent just has to copy and paste the response into an email with links to further documentation, additional contacts, promotional offers and other communication.
Code generation: Developers love tools such as GitHub Copilot that raise their productivity levels and free them up to do more valuable coding work. Developers stay in control of the code.
Semantic search: The ability of the bot to get the meaning of the text to provide improved, customized and domain-specific responses relevant to the user.
Summarization: Taking big pieces of content and distilling them down to salient points that, combined with sentiment analysis, can enumerate the hot topics as well.
Meta Prompt
Robinson also unveiled the “meta prompt,” a framework for guardrails around the output of generative AI models. They are designed to prevent things like jailbreaks, which let the user get around restrictions programmed into generative AI models. These restrictions include banning discriminatory, false and harmful content.
“I can say, jailbreaks − don’t give away the keys,” Robinson said. “Don’t tell people how they can circumvent your security controls.”
There has been a slew of articles online about how to jailbreak ChatGPT. One jailbreak prompt tells ChatGPT to pretend to be a character named DAN, which stands for “Do Anything Now.” As DAN, ChatGPT does not have to abide by the rules set by its creator, OpenAI.
DAN can do things such as “pretend to access the internet, present information that has not been verified, and do anything that the original ChatGPT cannot do.”
Another capability of meta prompts is response grounding, Robinson said. It can let users specify the data to be used by the AI model to add contextual information to the response. Robinson said the result is a more accurate and much more personalized response.
A third capability is retrieval augmented generation or RAG. Let’s say in response grounding, the AI model is told to use an API of a system to which it has access. If asked, “what should I wear to go on a walk today?” the answer would probably be generic, information gathered from the web.
“There’s nothing about me (added), it doesn’t know where I am, doesn’t know what the weather is,” Robinson said. “It’s probably going to make some basic suggestions” that apply to anybody.
“What if I could expose APIs that gave the model access to things like my location? To my age? To my previous buying history? All that data … will build a very personalized response to my question,” he added.
“When you think of opportunities in retail, when we’re approaching health care scenarios and people are asking questions and interaction with these bots, that is going to be game-changing,” Robinson said.
This article first appeared in IoT World Today's sister publication AI Business.
Read more about:
AI Summit London 2023About the Author
You May Also Like