It’s all about trust.

That was the watchword at Dreamforce, the recent annual Salesforce conference, which emphasized the importance of making sure that we can, at least to some extent, trust the use of generative AI, especially in an enterprise context. This is a crucial direction if generative AI is going to take hold in today’s AI-seeking enterprises.

The bottom line is that you cannot sensibly rely upon generative AI unless you know with reasonable confidence that the AI will produce serious, reliable, and trustworthy business results.

The event started as is customary with an opening keynote by Marc Benioff, Chair and CEO of Salesforce, during which he hammered away at the existing trust gap associated with generative AI. Generative AI can produce impressive outputs that are stunning in their fluency, yet at the same time, those outputs can readily contain errors, falsehoods, biases, glitches, and so-called AI hallucinations, among many other adverse maladies.

Trying to discern whether the good stuff in a generated result also contains bad stuff can be daunting. A momentous concern is that the problematic and at times insidious compromises can easily get past someone using generative AI and they will unknowingly pass along generated results that are toxic or blunder-laden and potentially lead to unsavory legal and reputational repercussions. That is bad for the person using the generative AI. And bad for others receiving or dependent upon those outputs. Bad, bad, bad. Bad for business all told.

Okay, so we need ways to decrease the bad from happening, which in turn will bolster a sense of trust in making use of generative AI. I will come back to this consideration momentarily. Let’s first get back to the Dreamforce conference and some additional context on gaining trust around the wild and woolly generative AI of current times.

Generative AI has been gradually and incrementally made available in Salesforce so that customers can benefit from the wonderment that generative AI can provide. This AI infusion comes under the umbrella of Salesforce system components coined as Einstein (a name that leverages the public perception of the brilliance of Albert Einstein). Dreamforce’s big announcement this year along those lines consisted of the unveiling of an integrated Einstein 1 Platform. This essentially is a macroscopic wraparound of various AI-boosting functionality and features, including the Salesforce Data Cloud, Salesforce metadata framework, Einstein AI, Einstein Copilot, Einstein Copilot Studio, Einstein Trust Layer, and the like.

Upon Marc Benioff finishing his vibrant high-level keynote remarks, a handoff of the talk was made to Parker Harris, Co-Founder and CTO of Salesforce, for some more detailed elaborations on the matters at hand. Of those, a pointed elucidation covered the Einstein Trust Layer. That’s a particular piece of the pie that I aim to use as a launching pad herein to discuss prompt engineering advancement and the future of generative AI overall.

I tend to refer to a generative AI trust layer as an encasing that embodies an outside-in approach to generative AI (see my prior coverage at the link here and the link here, just to name a few).

Allow me to explain.

This has to do with the building and fielding of elements associated with generative AI that will serve as a trust-boosting layer outside of generative AI. What we might do is attempt to surround generative AI with mechanisms that can help prod generative AI toward being trustworthy and failing that we can at least have those same mechanisms seek to ascertain when trustworthiness is being potentially forsaken or undercut.

I liken this to putting protection around a black box. Suppose you have a black box that takes inputs and produces outputs. Assume that you have limited ability to alter the internal machinations of the black box. You at least have direct access to the inputs, and likewise, you have direct access to the outputs.

Therefore, you can arm yourself by trying to purposefully devise inputs that will do the best for you, such that they will hopefully get good results out of the black box. Once you get the outputs from the black box, you once again need to be purposefully determined to scrutinize the outputs so that if the black box has gone awry you can detect this has occurred (possibly making corrections on the fly to the outputs).

Your sense of trust toward the black box is being bolstered due to the external surrounding protective components. The aim is that the stridently composed inputs will steer the black box away from faltering. In addition, no matter what the black box does, the additional aim is to assume that the outputs from the black box are intrinsically suspicious and need a close-in double-check.

If the maker of the black box can meanwhile also be tuning or advancing the black box to be less untrustworthy, we construe that as icing on the cake. Nonetheless, we will still maintain our external trust layer as a means of protecting us from things going astray.
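The black-box framing can be sketched in a few lines of code. This is a hypothetical illustration, not any vendor's implementation: `call_model` stands in for the opaque generative AI, and the input and output checks are deliberately simplistic placeholders for the far richer mechanisms a real trust layer would apply.

```python
def sanitize_prompt(prompt: str) -> str:
    # Input-side protection: tidy the prompt before it reaches the black box.
    # A real trust layer would also do masking, grounding, screening, etc.
    return prompt.strip()

def looks_suspicious(output: str) -> bool:
    # Output-side protection: treat results as intrinsically suspect.
    # Placeholder heuristics; a genuine checker would be far more capable.
    flagged_terms = ["guaranteed", "always", "never fails"]
    return any(term in output.lower() for term in flagged_terms)

def trusted_call(prompt: str, call_model) -> str:
    readied = sanitize_prompt(prompt)
    output = call_model(readied)  # the black box does whatever it does
    if looks_suspicious(output):
        return "[FLAGGED FOR HUMAN REVIEW] " + output
    return output

# Usage with a stand-in model:
result = trusted_call("  Summarize Q3 sales  ", lambda p: "Sales always improve.")
```

The key design point is that neither protective function needs any access to the model's internals; both operate strictly on what goes in and what comes out.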

Making Sure To Stay On Alert When It Comes To Generative AI

As the famous line goes, trust but verify.

The avenue for garnering trust in generative AI can be pursued in a twofold fashion:

  • (1) Boosting generative AI so that it is more trustworthy.
  • (2) Surrounding generative AI with protections to gauge trustworthiness and prod toward trustworthiness.

My prediction has been and continues to be that we are going to see both of those paths being pursued, though the first path will be slow and ponderous (i.e., trying to change up generative AI technologically toward heightened trustworthiness is a hard problem and a head-scratcher based on what we know today), while the second path can be faster to market and keenly cope with matters while that first path is winding along.

Here’s what I’ll cover in this discussion.

I am going to briefly introduce you to the Salesforce instance of their trust layer prompt engineering innovation and then will switch over to exploring how this is generalized and overarchingly representative of a trend that I have repeatedly described in my columns. In a sense, the Salesforce effort reinforces my exhortations about how prompt engineering is going to be changing and how generative AI will be impacted accordingly.

Thus, in today’s column, I am continuing my special series on advances in prompt engineering and generative AI to discuss an innovation that is exemplified via a particular product line but that will abundantly continue to emerge in a variety of additional guises by many other AI makers, AI developers, AI researchers, and other software firms. I am mainly using the Salesforce instance as a backdrop to illustrate that these are real advancements that have real-world value and will be indubitably devised and adopted by any astute businesses wanting to dutifully and mindfully leverage generative AI.

Before I dive into my in-depth exploration of this vital topic, let’s make sure we are all on the same page when it comes to the foundations of prompt engineering and generative AI. Doing so will put us all on an even keel.

Prompt Engineering Is A Cornerstone For Generative AI

As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI and large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc. ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.

For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful explorations on the latest in this expanding and evolving realm, including this coverage:

  • (1) Practical use of imperfect prompts toward devising superb prompts (see the link here).
  • (2) Use of persistent context or custom instructions for prompt priming (see the link here).
  • (3) Leveraging multi-personas in generative AI via shrewd prompting (see the link here).
  • (4) Advent of using prompts to invoke chain-of-thought reasoning (see the link here).
  • (5) Use of prompt engineering for domain savviness via in-model learning and vector databases (see the link here).
  • (6) Augmenting the use of chain-of-thought by leveraging factored decomposition (see the link here).
  • (7) Making use of the newly emerging skeleton-of-thought approach for prompt engineering (see the link here).
  • (8) Determining when to best use the show-me versus tell-me prompting strategy (see the link here).
  • (9) Gradual emergence of the mega-personas approach that entails scaling up the multi-personas to new heights (see the link here).
  • (10) Discovering the hidden role of certainty and uncertainty within generative AI and using advanced prompt engineering techniques accordingly (see the link here).
  • (11) Vagueness is often shunned when using generative AI but it turns out that vagueness is a useful prompt engineering tool (see the link here).
  • (12) Prompt engineering frameworks or catalogs can really boost your prompting skills and especially bring you up to speed on the best prompt patterns to utilize (see the link here).
  • (13) Flipped interaction is a crucial prompt engineering technique that everyone should know (see the link here).
  • (14) Leveraging are-you-sure AI self-reflection and AI self-improvement capabilities is an advanced prompt engineering approach with surefire upside results (see the link here).
  • (15) Know about the emerging addons that will produce prompts for you or tune up your prompts when using generative AI (see the link here).
  • (16) Make sure to have an interactive mindset when using generative AI rather than falling into the mental trap of one-and-done prompting styles (see the link here).
  • (17) Prompting to produce programming code that can be used by code interpreters to enhance your generative AI capabilities (see the link here).
  • (18) Make sure to consider target-your-response considerations when doing mindful prompt engineering (see the link here).
  • (19) Additional coverage including the use of macros and the astute use of end-goal planning when using generative AI (see the link here).
  • (20) Showcasing how to best use an emerging approach known as the Tree of Thoughts as a leg-up beyond chain-of-thought prompt engineering (see the link here).
  • (21) The strategic use of hints or directional stimulus prompting is a vital element of any prompt engineering endeavor or skillset (see the link here).

Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.

Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:

  • The use of generative AI can altogether succeed or fail based on the prompt that you enter.

If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything substantive related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses covering suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.

AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).

There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.

With the above as an overarching perspective, we are ready to jump into today’s discussion.

Getting Back To First Principles About Prompting And Generative AI

Let’s get back to fundamental roots and the essentials of vital first principles when it comes to prompt engineering and generative AI.

Consider the age-old indication about the three keystone stages of computing:

  • (1) Inputs
  • (2) Process
  • (3) Outputs

Yes, I realize that seems almost like a simpleton contrivance. Bear with me. First, there are inputs that are fed into some computing process. The computing process then does whatever it does. Subsequently, the process opts to present or produce some outputs for us to see. Voila, those are the three major dance steps that underlie the conventional use of most computers.

Recast this in the realm of generative AI.

We have this:

  • (1) Inputs: Prompt composition and entry into generative AI
  • (2) Process: Generative AI app that does its thing
  • (3) Outputs: Results produced or generated by the generative AI

To show how quick-minded we are, let’s drop the labels stating that these are inputs, processes, and outputs, and simply say this:

  • (1) Prompt composition and entry into generative AI
  • (2) Generative AI app that does its thing
  • (3) Results produced or generated by the generative AI

I’ll add some shorthand labels to make this same indication more readable:

  • (1) Prompt: Prompt composition and entry into generative AI
  • (2) GenAI: Generative AI app that does its thing
  • (3) Prompt Results: Results produced or generated by the generative AI

As earlier stated, we might not have much control over the second step, the processing aspects. We are hoping that the AI makers of generative AI will strenuously pursue making their AI apps more trustworthy. We won’t though bet our whole lunch on that proposition.

We will meanwhile focus on the first step, regarding prompting, and we will also focus on the third step, the results being produced as outputs.

Begin with the first step, the prompting.

In a business setting, the odds are that when you enter a prompt, you might want to include pertinent business data. If you are asking generative AI to analyze the sales of a bunch of customers, you will seemingly need to include in your prompt the customer names and their sales figures. That seems like a necessity for conducting such an AI-based analysis and something anyone using generative AI for business purposes is bound to do.

There are some oopsies involved.

Suppose you have signed up to use a generative AI available on the web. Perhaps you grab your sales data from a handy-dandy online spreadsheet and paste the customer names and sales figures into your prompt. One question is whether the data that you just retrieved is going through an insecure or secure Internet connection. If the connection is insecure, you might have now inadvertently and unknowingly made available company data to potential hackers.

Not good.

You would be wise to use a secure connection.

Another issue might be that the spreadsheet containing the sales figures is a dated spreadsheet. In your haste to use the generative AI app, you mindlessly copied outdated data. The generative AI won’t likely care either way, since the AI app doesn’t particularly have a bead on whether the sales numbers are old or new. The outputs produced will be potentially sound as to analyzing the old sales numbers that you fed in via your prompt. Darned though that you failed to realize you had used outdated numbers and yet you opted to send along to the sales team the generative AI analysis. The sales team is confoundingly misled, not by the AI per se but by human error when establishing the prompt.

It sure might be nice to have a kind of solid grounding that when you use business data in your prompts, a double-checking occurs to ensure that the latest version of the data is being plopped into the prompt and not an outdated set.
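A freshness check of that sort can be as simple as comparing timestamps before any data gets plopped into a prompt. This is a minimal sketch; the 30-day staleness threshold is an assumed business rule, not a standard:

```python
import datetime

MAX_AGE_DAYS = 30  # assumed business rule: sales data older than this is stale

def is_fresh(last_updated: datetime.date, today: datetime.date) -> bool:
    # A simple grounding check: refuse to build a prompt from stale data.
    age_in_days = (today - last_updated).days
    return age_in_days <= MAX_AGE_DAYS

today = datetime.date(2023, 9, 20)
print(is_fresh(datetime.date(2023, 9, 1), today))   # recent enough
print(is_fresh(datetime.date(2023, 6, 1), today))   # outdated, would be blocked
```

In a fuller trust layer, a failed check like this would either halt the prompt or automatically swap in a retrieval from the authoritative, current data source.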

There’s more to consider too.

The business data about your customers includes their names and their respective sales figures, something that normally would be of a proprietary nature to your company. You seemingly are willing to allow this proprietary data to go beyond your immediate grasp by feeding the data into a generative AI app. I’d suggest you ought to be queasy doing so. There is a chance that somehow the data might be intercepted or otherwise compromised in terms of secrecy.

Aha, to solve that dilemma, you might be able to mask the data and still do the analysis. For example, suppose you assigned numbers to each of the customers. You would not feed the names of the customers into the generative AI and instead feed the assigned numbers. This could reduce the chances of compromising the secrecy involved.
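The customer-numbering idea can be sketched as follows. The token format, the record layout, and the customer names here are all hypothetical illustrations:

```python
def mask_customers(records):
    # Replace customer names with assigned tokens before the data
    # leaves the company's hands.
    mapping = {}
    masked = []
    for i, (name, sales) in enumerate(records, start=1):
        token = f"CUST-{i:04d}"
        mapping[token] = name  # kept internally, never sent to the AI app
        masked.append((token, sales))
    return masked, mapping

def demask(text: str, mapping: dict) -> str:
    # Translate tokens in the AI output back into customer names.
    for token, name in mapping.items():
        text = text.replace(token, name)
    return text

records = [("Acme Corp", 120000), ("Globex", 95000)]
masked, mapping = mask_customers(records)
# masked is [("CUST-0001", 120000), ("CUST-0002", 95000)]
output = demask("CUST-0002 shows declining sales.", mapping)
# output is "Globex shows declining sales."
```

Note that the mapping table itself becomes sensitive and must stay inside the company; only the tokens travel to the generative AI.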

A recap so far is that you might want to compose a prompt that contains proprietary business data and do so via a secure data retrieval from some trusted data source, and you want to do a dynamic grounding such that the data is current. Before sending this data to the generative AI app, you want to mask it and ergo protect the proprietary nature, and you want to use a secure connection or gateway when you access the generative AI.

Are you with me on this?

I hope so.

Let’s continue.

One issue that many businesses face is that, depending upon the licensing terms of the generative AI app, the prompts you enter might be open to the AI maker to inspect and possibly reuse (see my coverage at the link here of the privacy and confidentiality dangers when using generative AI). If the data that you’ve used in the prompt is unmasked, this is quite a disconcerting affair. You are handing it over to the AI app and have no idea where it might go or how it might be used. The AI makers tend to argue that they need such data to further enhance their generative AI, allowing it to be further data-trained on the data that people enter into their prompts.

Despite that seemingly valued basis, the potential compromise of your data is something that would seem more important to you than aiding the AI app in being improved. I say this too even if you have masked your data. There could be other facets in your prompt that are proprietary. Furthermore, it could be that the masked data can be cleverly unmasked by a determined bad actor if they could somehow find the masked data now ingrained as part of the pattern-matching of the generative AI.

If possible, you might seek out a generative AI app that has explicit licensing regarding a zero-retention policy. The AI maker will be making a promise that they have devised or set up their AI app to not retain the prompts that you enter. Note that this is not necessarily ironclad in the sense that there is still a chance that the AI app might technologically reuse the data, but at least the AI maker has promised that this won’t happen, and you therefore would likely have a stronger legal case for claiming damages thereof (consult your AI-savvy attorney on such thorny legal matters).

Aiming To Make Those Generative AI Outputs Trustworthy

Shift next into an outbound mode.

Assume that the generative AI produces outputs. You are gleefully excited to see the results. Once again, the results ought to come to you via a secure connection or gateway, or else they could be compromised.

Before you can suitably read the results, you would likely want to make sure that any data masking is undone or demasked. The numbers that you earlier assigned to the customers could be translated back into their respective customer names. Doing so would make life easier for you when examining the output that you now have from the AI app.

Next, the outputs might have glaring issues or might have subtle and hard-to-discern issues. For example, the narrative analyzing the sales data might include comments expressed by the generative AI that are toxic. The AI app might indicate that a customer ought to be dropped. Why so? The explanation might be based on an inherent bias in the initial data-training of the generative AI. The person who entered the prompt might not have brought this out; instead, the internal pattern-matching of the AI app did so.

A toxicity detection of the results or output from generative AI can be carried out by using an automated tool that inspects the wording presented. The latest such tools often make use of AI, namely using a variation of generative AI to examine the outputs of another generative AI app, see my coverage at the link here. I would like to emphasize that these automated tools do not provide any surefire guarantee of catching maladies in the outputs. These tools can perform a helpful first pass. They are decidedly not a silver bullet.

Regrettably, some people get lulled into assuming that a toxicity analyzer is omniscient. Wrong. The matter is compounded by skipping any semblance of human scrutiny or a studious look-see of the output. Please don’t fall into that unsafe and risky trap.
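A keyword heuristic like the sketch below illustrates the first-pass idea, though as noted, real toxicity detectors typically use ML models rather than word lists. The marker words here are placeholders, and a clean pass is emphatically not a guarantee:

```python
# Placeholder marker list; production tools use ML classifiers, not word lists.
TOXIC_MARKERS = {"stupid", "worthless", "hate"}

def toxicity_first_pass(output: str) -> bool:
    # Returns True if the output should be routed to human review.
    words = {w.strip(".,!?").lower() for w in output.split()}
    return bool(words & TOXIC_MARKERS)

print(toxicity_first_pass("Drop this worthless customer."))      # flagged
print(toxicity_first_pass("Customer sales rose 12% this year.")) # passes
# A pass here is NOT omniscience -- human scrutiny is still warranted.
```

Treat any such screen as one filter in a series, with human review as the backstop rather than an optional extra.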

A final thought for now about the three stages involved in using generative AI is that you might want to do tracking of what you used the generative AI for.

Businesses typically like to have auditable trails. An online and secure log should be kept. Who did what with the generative AI? When did they do so? What did they do? Etc. These are all crucial when trying to assess whether things are proceeding properly and also when trying to figure out what went wrong. It would be useful to have a log automatically kept about what occurred when the generative AI was being used.
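The kind of log record in view can be sketched as follows. The field names and the user ID are assumptions for illustration, not any particular product's audit schema:

```python
import datetime

def log_interaction(log: list, user: str, prompt: str, output: str) -> None:
    # Append one auditable record per generative AI interaction:
    # who used it, when they did so, what they asked, and what came back.
    log.append({
        "user": user,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "output": output,
    })

audit_log = []
log_interaction(audit_log, "jsmith", "Analyze Q3 sales", "Sales rose 8%.")
```

In practice such a log would be written to secure, append-only storage rather than an in-memory list, so that the trail cannot be quietly edited after the fact.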

The recap then for this last bit of discussion above is that you might want to have generative AI that abides by a zero-retention policy and won’t absorb or reuse your prompts, you want to have the outputs provided via a secure connection or gateway, you want to have the outputs demasked if they were earlier masked, you want to have the output examined for toxicity, and you want from a business-level perspective to have an audit trail kept of what has taken place when using the AI app.

Existence Proof Of Finding Value In Trust Layering For Generative AI

By and large, nearly all of those aforementioned elements can be done outside of the generative AI app.

You can surround the generative AI with a set or suite of elements that will take on those chores. There isn’t necessarily a need to find a generative AI app that internally includes those added protections (few do, as I’ll be explaining later herein). Instead, you might find a third party that has assembled those elements into a trust layer for you. The trust layer sits outside of the generative AI and serves as your protector or trust-inducing ally when using the generative AI.

A prime example of this consists of the Einstein Trust Layer as part of the Salesforce Einstein 1 Platform.

Per the materials presented at Dreamforce, here’s an abbreviated version of their Salesforce Trust Layer components and what they consist of (I’ve excerpted the descriptions from Salesforce online postings, please make sure to look online at the Salesforce website if you want the nitty-gritty details and the full depiction):

  • Prompt: “A prompt is a canvas to provide detailed context and instructions to Large Language Models.”
  • Secure Data Retrieval: “This lets you bring in the data you need to build contextual prompts. In every interaction, governance policies and permissions are enforced to ensure only those with clearance have access to the data.”
  • Dynamic Grounding: “Dynamic grounding steers an LLM’s answers using the correct and the most up-to-date information, ‘grounding’ the model in factual data and relevant context.”
  • Data Masking: “Data masking replaces sensitive data with anonymized data to protect private information and comply with privacy requirements. Data masking is particularly useful in ensuring you’ve eliminated all personally identifiable information like names, phone numbers and addresses, when writing AI prompts.”
  • Secure Gateway (for securely transporting data back and forth with the generative AI)
  • Generation (invoking a generative AI app)
  • Zero Retention: “Zero retention means that no customer data is stored outside of Salesforce. Generative AI prompts and outputs are never stored in the LLM, and are not learned by the LLM.”
  • Toxicity Detection: “Toxicity detection is a method of flagging toxic content such as hate speech and negative stereotypes. It does this by using a machine learning (ML) model to scan and score the answers an LLM provides, ensuring that generations from a model are usable in a business context.”
  • Audit Trail: “Auditing continually evaluates systems to make sure they are working as expected, without bias, with high quality data, and in-line with regulatory and organizational frameworks. Auditing also helps organizations meet compliance needs by logging the prompts, data used, outputs, and end user modifications in a secure audit trail.”

You can expect that these types of trust layer compilations will continue to expand and be further enhanced.

Businesses are beginning to realize that using generative AI without any notable encompassing trust layer is a potential disaster in the making. A raw or naked generative AI that is not encased or at least surrounded by a viable trust layer is a pure risk at the get-go. Enterprises that are in the know are right to expect that any use of generative AI must entail a trust layer encapsulation. Period, end of story.

As a former global CIO/CTO and someone who has many times led and fostered the adoption of CRM packages including Salesforce, along with having adopted AI-based systems overall, I can attest to the strident need for having trustworthy enterprise systems. Leaping into generative AI without a sensible and mindfully devised approach to turning the AI into a more trusted source and tool is foolhardy and will undermine even the most resilient of companies.

Some organizations have gone so far as to ban the use of “raw” generative AI throughout their enterprise simply due to the high risk associated with the omission of a trust layer. They have a solid point. Merely training employees on how to suitably use generative AI that lacks a trust layer is not for the faint of heart. You’ll have employees who do not abide by the training and thus end up inappropriately using generative AI. There will be employees who forget the training as it rapidly decays in their minds if not used daily. And so on.

The use of a trust layer provides some relief. By establishing sufficient automation to aid in the inputs and outputs of generative AI, there is a fighting chance to leverage generative AI at a more reasonable risk level. You don’t need to simply shrug your shoulders and believe that garnering the advantages of generative AI requires wide-open unmitigable risks. The layering of protective elements can substantially knock down the risks involved.

I do though want to clarify and stipulate that this does not imply a no-risk circumstance. One issue that some have with referring to “trust layers” is that the implication might suggest that trust is either on or off. You either presumably fully and blindly trust something or you don’t trust it at all. That is a classic confusion of assuming a false dichotomy. You have to think of this as a continuum of varying levels of trust, or perhaps, if you wish, varying levels of distrust.

A trust layer increases your willingness to trust and decreases your willingness to distrust. But that doesn’t mean that those go to zero. They don’t. Trust is a relative term.

The Future Of Prompt Engineering And Full-Blown Trust Layers

The days of merely entering a prompt and having it drop directly into a raw generative AI app are fading, especially for business use. Consumers are likely to continue using generative AI in the prompt-to-AI unfettered fashion. Even there, I’ve predicted that a market for prompt engineering trust layers for the consumer side of AI use is arising too. Businesses will be the first to go this route and be followed soon thereafter by consumers clamoring for similar protections.

You can also keep your eyes on lawmakers and regulators as they are bound to propose and potentially pass legislation and regulations that legally insist on such provisions. AI makers will be in the hot spot. Firms that adopt AI will be in the hot spot. Pressures to devise and employ generative AI trust layers are going to skyrocket. We are just in the early days now.

I’ll give you a quick taste of the myriad of layering components that are coming to the fore. After I list them out, we’ll do a review of what they consist of (also see my prior columns covering the details).

Let’s categorize these into the three-stage framework of Inputs, Process, and Outputs.

  • (1) Inputs or Prompts for Generative AI
      • (a) Pre-Prompt Persistence Settings And Instructions
      • (b) Prompt Compositional Guidance
      • (c) Prompt Initial Pre-Screening
      • (d) Prompt Toxicity Assessment
      • (e) Prompt Secure Data Retrieval
      • (f) Prompt Data Grounding
      • (g) Prompt Rationalization
      • (h) Prompt Data Masking And Anonymizing
      • (i) Prompt Final Refiner For Optimization
      • (j) Prompt Review With User (If Selected)
      • (k) Prompt Feeder For Secure Delivery To GenAI
  • (2) Process for Generative AI
      • (a) Validation And Verification Of Submitted Prompt
      • (b) Privacy And Confidentiality Pre-Screening
      • (c) Retention Policy Settings Compliance
      • (d) Anti-Hallucination Of AI Suppressants
      • (e) Self-Checks For Self-Generated Maladies
      • (f) Etc.
  • (3) Outputs or Prompt Results from Generative AI
      • (a) Prompt Result Security Evaluation
      • (b) Prompt Result Demasking
      • (c) Prompt Result Toxicity Assessment
      • (d) Prompt Result Transformation
      • (e) Prompt Result Presentation
      • (f) Prompt Result Feedback From User
      • (g) Prompt Result Analysis For New Prompts
      • (h) Overall Audit Trail Updating
      • (i) Monitor And Alert If Needed
Let’s start with the first stage entailing prompt preparations.

You’ll keenly observe that there are a lot of intervening steps between the composing of a prompt and its readiness for being fed over into the generative AI app. Some balk at this litany of steps. It is too much they decry. Users won’t stand for it. They are going to go rogue and try to go around the trust layers. The users will in dire exasperation aim to avoid a seeming bureaucratic gauntlet of checks and balances, merely when all they want to do is get a prompt underway.

We can easily take the air out of that argument.

First, envision this akin to going to McDonald’s to order a hamburger and fries. If I told you all the steps required to fulfill your order, you might lament that it is too much to simply get a burger and fries. The reality is that you don’t have to witness the step-by-step laborious procedure of having someone get out a hamburger bun, toast the bun, get out a hamburger patty, place the patty onto the grill, cook the patty, turn over the patty, remove the patty from the grill and place onto the bun, and so on. You are free of that confabulation.

The same holds true for the above-noted steps involving your prompt being made ready for being fed into generative AI. Most of those steps can take place behind the scenes. You enter your prompt, and the automation assembly line of trustworthy augmentation does the rest for you.
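That behind-the-scenes assembly line can be sketched in miniature. The following is a hypothetical illustration, not Salesforce’s actual Einstein Trust Layer API: all function names, the `[PERSON_1]` placeholder convention, and the canned generative AI stand-in are assumptions made for demonstration.

```python
# Hypothetical sketch of a three-stage trust layer: prepare the prompt,
# hand it to the generative AI, then process the result. Names are
# illustrative, not any vendor's actual API.

def prepare_prompt(raw_prompt: str) -> str:
    """Stage 1: mask sensitive data before the prompt leaves the trust layer."""
    masked = raw_prompt.replace("Jane Doe", "[PERSON_1]")  # data masking step
    return masked.strip()  # final refinement step

def call_generative_ai(prompt: str) -> str:
    """Stage 2: stand-in for the generative AI app (a canned echo here)."""
    return f"Result for: {prompt}"

def process_result(result: str) -> str:
    """Stage 3: demask the output before presenting it to the user."""
    return result.replace("[PERSON_1]", "Jane Doe")  # demasking step

def trust_layer(raw_prompt: str) -> str:
    """Chain the stages so the user only sees the start and the end."""
    return process_result(call_generative_ai(prepare_prompt(raw_prompt)))

print(trust_layer("Summarize the account notes for Jane Doe"))
```

The point of the chaining is exactly the McDonald’s analogy: the user calls one function, and the intermediate masking and demasking never surface.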

Second, we can potentially ensure that the only means of submitting a prompt to the chosen generative AI will be by having your base prompt go through the checks and balances. Depending upon how we’ve set up the connection and the pre-screening by the generative AI, it could be that only once a prompt has gotten the proper seal of approval will the GenAI of choice be accepting of the prompt. Your only recourse to use the generative AI will be to abide by the trust layer.
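One way to enforce that “seal of approval” is to have the trust layer sign each vetted prompt and have the GenAI-facing gateway reject anything unsigned. The sketch below is a simplified assumption of how such a handshake might work; the shared-secret scheme, function names, and error handling are all illustrative.

```python
# Hypothetical sketch: the GenAI gateway only accepts prompts carrying an
# approval seal issued by the trust layer's checks. Illustrative only.
import hashlib

SHARED_SECRET = "trust-layer-demo-secret"  # assumed shared by layer and gateway

def approve(prompt: str) -> tuple[str, str]:
    """Trust layer: run its checks, then seal the prompt for submission."""
    # ... masking, screening, and other checks would run here ...
    seal = hashlib.sha256((SHARED_SECRET + prompt).encode()).hexdigest()
    return prompt, seal

def genai_gateway(prompt: str, seal: str) -> str:
    """GenAI side: reject any prompt whose seal does not verify."""
    expected = hashlib.sha256((SHARED_SECRET + prompt).encode()).hexdigest()
    if seal != expected:
        raise PermissionError("Prompt did not pass through the trust layer")
    return f"Generated answer for: {prompt}"

prompt, seal = approve("Draft a renewal email")
print(genai_gateway(prompt, seal))  # sealed prompt is accepted
```

An unsealed or tampered prompt raises `PermissionError`, which is the programmatic equivalent of “your only recourse is to abide by the trust layer.”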

That being said, this is not some knucklehead means of blocking employees from simply using generative AI. It is instead an important realization that employees are going to be protected whether they want it or not, or realize it or not. Willy-nilly allowing employees to go around the protective mechanisms is a bad idea. A business would still undoubtedly be responsible for what its employees did. If you are going to protect the employees and the firm, you’ve got to establish a means to do so and stick with it. And, as noted above, most of the activity will occur behind the scenes and the employee will have no inkling of the painstaking under-the-hood effort aiding them.

I’d like to go on a side tangent briefly.

One ongoing debate is whether or not the employee or user ought to be able to see the final readied prompt before being submitted to the generative AI app. There are tradeoffs in this conundrum.

If you don’t let the user see the readied prompt, they won’t know exactly what was fed into the generative AI. On the other side of things, when the end result is presented to the user, they might be baffled as to how the output matches what they believe they provided as their prompt. You see, there is a notable chance that the data retrievals, the masking, and the other transformations have turned the prompt into something far afield of what the user intended.

Imagine the confusion that would reign. You type in a prompt. Easy-peasy. In your mind, you are expecting a likely output of some kind. The output appears. Oddly, it seems that the output is not at all what you intended in your prompt.

Anybody who uses generative AI avidly today knows what I am talking about. You sometimes need to do “debugging” whereby you study the output, reexamine your prompt, and make mental contortions as to how to modify your prompt to get what you really want. You are playing the role of a detective. What in your initial prompt led to the output that was generated? What can you change in that prompt to steer the generative AI toward an output that is closer to your preferences?

So, if the prompt that you know you entered is not the final prompt that was submitted, you are going to have quite a wits’-end battle doing this debugging. You have to guess blindly about what transformations were made to your prompt. A stab in the dark occurs. You then shakily compose a new prompt, which again goes through transformations. Yikes, the output still doesn’t match your expectations.

Rinse and repeat.

Given that sad tale of woe, some insist that the user should be shown the final readied prompt before it is sent over to the generative AI app. This would allow the user to do a sanity check on the transformed prompt. Furthermore, there is no point in incurring the costs of having the generative AI process a prompt if the user can readily discern that the prompt is no longer suitable for being utilized. The costs of using generative AI can add up, especially if a lot of wanton back-and-forth is taking place when aiming in the dark with your prompts.

Okay, you might be thinking, just show the darned prompt to the user.

That answer has troubles. The prompt that you initially entered will now likely contain oddball-looking contortions. The masking might seem strange to you. The prompt might also have been reworded to try to garner optimizations out of the generative AI. All in all, the user is potentially going to be baffled by the finalized prompt.
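To see why the finalized prompt can look so odd to its own author, consider a tiny masking sketch. The regex patterns and placeholder tokens here are assumed for illustration; a real trust layer would use far more robust detection of sensitive values.

```python
# Hypothetical sketch of why a transformed prompt can baffle its author:
# masking swaps real values for opaque placeholder tokens.
import re

def mask(prompt: str) -> str:
    """Replace email addresses and dollar amounts with placeholders."""
    prompt = re.sub(r"[\w.]+@[\w.]+", "[EMAIL_1]", prompt)   # crude email pattern
    prompt = re.sub(r"\$[\d,]+", "[AMOUNT_1]", prompt)       # crude money pattern
    return prompt

original = "Email jane@acme.com that the $12,500 quote is approved"
print(mask(original))
# The user typed the original, but the AI receives the masked version.
```

The user wrote a perfectly clear sentence; what the generative AI receives is sprinkled with `[EMAIL_1]` and `[AMOUNT_1]` tokens that the user never typed, which is exactly the source of the debugging confusion.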

In addition, the user might then distort their original prompt to get the transformations to come out in a manner that is no longer helpful. Mistakenly believing that the transformations have wrongly recomposed their prompt, the user enters a game of cat and mouse. They start trying to outwit or outfox the trustworthy transformations. No longer are they necessarily focused on trying to outsmart or at least steer the generative AI.

I trust that you can see the difficulties that all of this presents. Do you allow a user to see the final readied prompt? If so, what can they do about it? Will you want the user to do something, even if the something might be astray? On the other hand, if you don’t show it to the user, they are possibly hopelessly deadlocked into not knowing what is going on with their entered prompts.

Various routes are being tried in this dilemma. One is to show the user the transformations step-by-step, so the fully transformed prompt doesn’t get tossed at them only after the entire foray has taken place. Another is to allow the user to designate whether they are a rookie versus a seasoned generative AI user and, depending upon their presumed level of expertise, show or hide the inner workings. And so on.

Returning to the overarching matter at hand, all of the same logic can be applied to the prompt result. Should we show the prompt result that first comes out of the generative AI or only display it to the user once the demasking and other transformations have taken place?

I’ll let you ruminate on that.

A Growing Marketplace For Trust-Inducing Layers Surrounding Generative AI

AI makers of generative AI are generally focused on the innards of generative AI and seek mightily to push the envelope of devising AI on a heap of high-tech AI advancing fronts. This is their core competency. This is what they live and breathe for.

Whether they are also pressing forward on the prompting side or the prompt results side is typically a lesser priority. There are more record-breaking considerations on their minds. For example, they want to get multi-modal generative AI into the world (see my coverage at the link here). They want to make generative AI bigger and better in terms of fluency (see my analysis at the link here). Etc.

If you were designing cars, the odds are that you might focus on making more powerful engines and making cars go faster and faster. You might also be dreaming about the day that you can make cars fly. Dealing with aspects of where people will sit in cars and what kinds of safety devices will protect them, well, that’s just not as exciting as making big-time breakthroughs, if you know what I mean.

To some degree, the inputs and the outputs are more of a user interface (UI or UX) design problem. Historically, the UI/UX hasn’t been at the top of the list for heads-down AI deep thinkers. They figure that the inputs and the outputs are at the periphery of what they need to focus on (whoa, I am not saying this is true of everyone, and please know that many in AI have human-factors design as a priority). I am not saying they don’t care about it, only that amid the pressing advances they are most energized to pursue, the human interface side is not as world-changing for them.

That’s the bad news.

The good news is that this has opened a door for those who want to build add-ons for generative AI that strike at the near and dear heart of devising and boosting AI trustworthiness. This is a rare and valuable opportunity that has been handed to those who want to take it. There are windfalls to be had.

While many AI makers are putting these matters lower on their To-do lists, all kinds of startups and other software makers have jumped into the gap. They are, in a sense, leveraging an opportunity to bridge the trust gap with generative AI. The startups are eager to potentially sell such wares into the market at large. They can also potentially be licensed by a mega AI maker, or possibly be bought up for their capabilities by a larger firm that is trying to put together a full suite of offerings. Many options are possible.

And, in terms of the trustworthiness components for generative AI, somebody has to do it.

The beauty of this is that there is an absolute need here and you don’t have to try and come up with puffy reasons for why such components are desirable. As you likely know, oftentimes in the software field there is a solution devised out of thin air that is groping to find a problem that needs to be solved. Not so in this case. The problem is real. The problem is growing.

Solutions are wanted, sorely so.

Will generative AI makers eventually opt to do this themselves? One supposes this is a strong possibility, akin to when operating system providers at first ignored or didn’t have the bandwidth to pursue OS add-ons, and then later on opted to either organically devise the add-ons or simply buy outright a best-in-class available provider. But that’s likely years away in the case of generative AI. AI makers have a lot of other big fish to fry, or so they believe.

Another angle is that some sizable big bucks software providers in promising or proven vertical or horizontal industry applications will tackle these matters. They don’t want to wait until the AI makers of generative AI decide to come around and do this. Might as well take the bull by the horns. If your industry application is getting pressed to connect with generative AI, and you don’t want your application to bear the brunt of outrage when the AI acts in an untrustworthy way, you might have to grab the brass ring yourself.

Salesforce is certainly a handy example, and I have highlighted herein their efforts to showcase how having a trust layer for generative AI is gradually and inevitably going to be a visible and humongous deal. Admittedly, this all is a bit below the radar right now regarding the mass media concentration on AI. Media would prefer to pontificate about generative AI that has gotten more fluent rather than devoting precious banner headlines to discussing added layers that make using generative AI a better and safer deal for users. Not as sexy.

In any case, those who can see the enterprise problems associated with generative AI are working fervently to devise protective layers to contend with enterprise needs. Indeed, I’ve been working with startups that are devising these various generative AI trust layer components.

Allow me to briefly elaborate on how this emerging marketplace lays out.

A straightforward way to conceive of things is by using these four noteworthy classifications:

  • (1) Stage of focus: Attention to inputs or “prompts” versus outputs (aka “prompt results”) for devising protective or trust-inducing software components.
  • (2) Range of components: Whether a single standalone component, an assortment, or, at the high end, a full tight-knit suite.
  • (3) GenAI specificity: Whether the component is for a specific generative AI app or is generically devised and aimed.
  • (4) Immersion of offering: Whether the component is devised by an independent company or one that is vendor-allied.

Consider an illustrative example that showcases those four classes.

A startup decides to devise a toxicity detection component. The component will be able to analyze a prompt and identify areas of toxicity that might have been included by the user when they composed the prompt. Realistically, this won’t be foolproof and there is always a chance that adverse stuff slips through. The same component can examine prompt results and seek to identify areas of toxicity in the output generated by generative AI.

An additional function will be built to try to detox the alleged toxicity.

Since the developers are mainly familiar with ChatGPT, they are going to aim this toxicity detection component at OpenAI’s ChatGPT. Here’s how this comes into play. The toxicity detox portion is going to reword prompts that seem to have toxicity in them. The rewording can markedly impact how the generative AI will respond. Different generative AI apps react to wordings in different ways. In this case, they know ChatGPT particularly well and will first devise the component to work with ChatGPT.
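The detect-and-detox idea can be sketched in a few lines. This is a deliberately naive illustration: a real product would use a trained classifier rather than a word list, and the flagged terms and replacements below are invented for the example.

```python
# Hypothetical sketch of a toxicity detect-and-detox component.
# A real offering would use a trained classifier; a word list stands in here.
TOXIC_TERMS = {"idiot": "person", "garbage": "low-quality"}  # illustrative list

def detect_toxicity(text: str) -> list[str]:
    """Return the flagged terms found in a prompt or a prompt result."""
    words = text.lower().split()
    return [term for term in TOXIC_TERMS if term in words]

def detox(text: str) -> str:
    """Reword flagged terms; the rewording can change how the AI responds."""
    for toxic, neutral in TOXIC_TERMS.items():
        text = text.replace(toxic, neutral)
    return text

prompt = "Tell that idiot vendor their garbage proposal is rejected"
print(detect_toxicity(prompt))
print(detox(prompt))
```

Note that the same `detect_toxicity` routine can be pointed at prompts on the way in and at prompt results on the way out, which is why the startup counts it as covering both stages.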

Later on, they will rework the software to apply to other generative AI apps.

They are going to make their component available on a standalone basis. It doesn’t link with other protective prompt engineering components. Their master plan is to someday build out their offering to eventually have an array of software trust layer products. That’s not in the cards right now.

We can now take a look at this startup from a larger perspective.

  • (1) Stage of focus: Attention to prompts and their toxicity, plus attention to prompt results and their toxicity (two components but of a greatly similar capacity)
  • (2) Range of components: Coming out the gate as a single standalone (future plans to branch out to become an assortment or maybe someday a full suite)
  • (3) GenAI specificity: Devoted to ChatGPT right now (later on include other generative AI apps)
  • (4) Immersion of offering: The firm is independent and not allied with any particular vendor or other software ecosystem
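The four classifications lend themselves to a small data structure, with the toxicity-detection startup filled in as an instance. The field names and string values below are my own labeling of the taxonomy, not an established schema.

```python
# Hypothetical sketch: the four marketplace classifications as a record,
# with the toxicity-detection startup as an example instance.
from dataclasses import dataclass

@dataclass
class TrustLayerOffering:
    stage_of_focus: list[str]      # "prompts", "prompt_results", or both
    range_of_components: str       # "standalone", "assortment", or "suite"
    genai_specificity: list[str]   # specific GenAI apps, or ["generic"]
    immersion: str                 # "independent" or "vendor-allied"

startup = TrustLayerOffering(
    stage_of_focus=["prompts", "prompt_results"],  # toxicity on both sides
    range_of_components="standalone",              # single component at launch
    genai_specificity=["ChatGPT"],                 # one GenAI app for now
    immersion="independent",                       # not tied to a vendor
)
print(startup)
```

Sweeping across the possible values of these four fields is exactly the "permutations and combinations" space of opportunities described next.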

That provides a helpful summary of what opportunity and approach this startup is taking.

There are lots of opportunities to be had. You can select a multitude of stages for your focus. You can do a standalone or right away reach for an assorted mix or a suite. You can try to attain being compatible with just one generative AI app or a slew of them. You can be independent or strive to marry your offerings to a particular vendor or software ecosystem.

Lots of permutations and combinations arise.


Sophocles, the legendary Greek tragedian, famously said that trust dies but mistrust blossoms.

We are somewhat on the cusp of that proclamation when it comes to using contemporary generative AI. People are starting to come down from the head rush that occurred when generative AI broke into the public stratosphere. You might be able to use generative AI on an individual basis and not get overly worried when the AI lies to you or concocts an AI hallucination.

A business doesn’t have that luxury.

Spreading untrustworthy generative AI into an enterprise is a recipe for absolute chaos and endangerment to the firm. Lawsuits will be aplenty. The executives will be at a loss to defend their decision to embrace generative AI. Though they might contend they got caught up in the feverish headiness, courts and the court of public opinion will not be sympathetic to their missteps.

Rushing into this trust gap is an effort to bolster generative AI to be more trustworthy. But like a giant ship that moves slowly when maneuvering, AI makers will be slow in revamping generative AI to garner magnitudes of added trust (they will squeeze in some, but at the same time might find new gotchas in the latest generative AI that once again detract from building wells of trust).

We can attempt to surround generative AI with protections. Put loads of protective mechanisms around generative AI. Bolster prompts to try to ensure the trustworthiness of generative AI. Boost prompt results to try to ensure the trustworthiness of generative AI. Package these together and make them into a trust layer that will confront the trust weaknesses of generative AI.

Trust, but verify.

That is the crucial mantra for business leaders who want to get ahead of the pack and adopt generative AI. You might not have heard much about these trust-boosting endeavors to date. That will change. The light is shining on generative AI and exposing the dangers and gotchas. The light will next shine on the means of mitigating those dreadful issues and the heroes that have toiled away quietly and determinedly to devise and field trust layers to enable smart and safer use of generative AI.

Those are the AI trust makers, so raise a rousing cheer for their Herculean labors.