Video: Measuring the ROI & Value of Agentforce: The Pre-Implementation Checklist | Duration: 3588s | Summary: Measuring the ROI & Value of Agentforce: The Pre-Implementation Checklist | Chapters: Welcome to Webinar (4.56s), Introducing Guest Speakers (93.49s), Agenda and Challenges (193.575s), Overcoming Implementation Challenges (298.89s), Rethinking Agent ROI (552.41003s), Measuring Agent Value (851.26s), ROI and Experimentation (1050.73s), Building Business Case (1324.385s), Agent Design Strategies (1591.28s), Analytics for Transformation (2310.405s), AgentForce Implementation Strategy (2525.25s), Mindset for AI (2666.52s), Rethinking AI Implementation (2846.84s), Pendo's Value Proposition (3082.08s), Beyond Efficiency Metrics (3175.865s)
Transcript for "Measuring the ROI & Value of Agentforce: The Pre-Implementation Checklist": Hello, everyone, and welcome to another Salesforce Ben webinar, this time in collaboration with Pendo. Today we're going to be going through measuring the ROI and value of an Agentforce implementation. Now, I'm sure you've heard of Agentforce by now. It's been a pretty Agentforce-heavy year, but there are still adoption issues. And although Salesforce is signing a lot of customers, there are still quite a lot of customers who haven't yet made that leap into, you know, experimenting, building a business case, or actually implementing Agentforce end to end. So hopefully, after today, we're going to give you a very good idea of what that looks like. We've got a great panel discussion with some great guests; we'll get into it in a minute. But just very quickly, if you haven't heard of Salesforce Ben, we are a media company in the Salesforce ecosystem. We have our website, salesforceben.com, which you can log on to. We're publishing about 20 posts a week, I think, across all different kinds of roles. We've got our newsletter, which you can sign up to and get news delivered to your inbox. Events like this one, but also physical events; I think the next one we'll be at is Dreamforce, and we've got quite a lot planned for that. Our YouTube channel, and then also our podcast as well. So please check some of those out. And just to introduce myself: my name is Ben McCarthy. I'm founder and CEO of Salesforce Ben. I've been in the ecosystem for thirteen, nearly fourteen years now, I think. So, yeah, super excited to host this discussion. And our first guest we're going to bring to the stage is Jake Wills, lead product manager at Pendo. Hello, everyone. Jake Wills. I'm the lead product manager.
Pendo, if you're not familiar, is a software experience management platform that helps you analyze and improve how customers and employees use software across Salesforce, your own software, and Agentforce. We believe you can't measure agent ROI until you understand current work: where time goes, where friction lives, and which outcomes matter. Great. Thank you so much, Jake. And next to the stage, an ecosystem legend, Ian Gotts. Hi there. Thank you for the intro. Yeah, at Elements.cloud we're all about making sure you understand your business processes and how Salesforce is configured, but we've also now extended that so you can use it for finding agents and building agents. So it's important that we're well prepared as we move into this new agentic world. Excited to be here. Thanks very much, Ian. And last but not least, the real MVP, Brandon Stauber. Sure, if I'm the real MVP, Ben, thank you for the introduction. Jake, Ian, nice to see you guys. Good afternoon, good morning, wherever your time zone might be. My name is Brandon Stauber, and I'm in our partner ecosystem at Salesforce. My role is to oversee our partner solution engineering team, and our goal specifically is helping our partners and, ultimately, our partners' customers adopt and innovate on Agentforce and our other exciting technologies. Really looking forward to the conversation today. It's going to be a lot of fun. Great. Thank you so much, guys. Do you have any other slides? Yes, here we go. So here's our agenda. As I said, today is going to be a panel discussion, and we're also going to have a Q&A at the end. So if you do have any questions for our panel, feel free to drop them in the Q&A box, or chat amongst yourselves in the chat box. We're going to go through, firstly, the challenges of Agentforce deployment, adoption, and ROI, then think a bit about the ROI that needs to be proven for Agentforce.
And because we're dealing with such a new technology, the game is kind of changing in terms of the old rules versus the new rules. Then building a business case for Agentforce, which is of course very, very important if you're looking to go away from this webinar and present a case to your leadership team. And then finally, some quick actionable steps in the form of a pre-implementation checklist, because you're going to need to have all the elements in place before you go into implementation. So, yeah, that is what we're going to be discussing. Let's get into it. Firstly, challenges of Agentforce deployment. Salesforce has made many big claims about Agentforce, but quite a lot of companies have not started their AI transformation yet. And maybe, you know, this could be because AI is not just the next new feature to implement; it's completely changing how work happens and what's possible. So maybe adoption is lagging not due to lack of interest but due to lack of clarity: what to expect, how to measure success, and when to act. So, Brandon, coming to you first. When companies are hesitating on Agentforce adoption, what's the most common internal blocker you see, and how can people on this call, or leaders in the company, reframe those conversations to drive progress? Yeah, you know, I think probably the number one is really just this resistance to change and knowing where to start. And, Ben, you mentioned, in introing the question, this concept of, is this just a new feature? But in reality, this is an entirely new paradigm of how to work. So I think what we see, and what I'm seeing, is this perception that there's a lot of complexity in the implementations. There's a disruption of workflows. There's maybe skepticism about the ROI, a topic that we're going to get to in a few minutes.
There's probably also, and I've heard this from a bunch of CEOs and other technical leaders of the partners we're working with, a question of readiness amongst customers in terms of the data that's available and maybe the infrastructure they have in place. And then it's probably also worth mentioning that there's just a general fear of what agentic AI will mean in terms of work displacement, and the ethical and customer experience issues around it. Yeah, 100%. Thanks for that. It's often not one issue; it's often a few. And coming to you now, Ian. You've done a lot around agents, and it might be worth giving a very short intro to Elements' experience with agents, because I know you've got a lot. But you often talk about clarity coming before capability. So what are the signs that a company may not be ready to deploy agents just yet? Okay. So the first thing is, we've probably got 20 Agentforce agents live that we're running across the business, and we've probably got 50-plus agents on other platforms as well. In fact, we have agents on our org chart. We think of them as digital employees, and we've got a formal process for onboarding agents and for reviewing them. So we've really gone all in and thought, okay, how are we actually going to make this work? So you ask the question, when is anybody ready? I mean, I think everybody is ready. It's very easy to go, oh, the data's not good enough. It will never be good enough. Oh, we haven't got our processes well enough defined. Well, I think the trick is pick an area. And I think Vala Afshar, who's the chief digital evangelist for Salesforce, said the way to get started is to get started. Now, you don't have to pick the biggest process and go, we're going to fix, say, order to cash.
Let's pick an internal, maybe employee-facing, agent with a narrow scope and start to build those muscles. So I think the biggest blocker, as Brandon has said, is the fear of going, this has changed and we're not sure how to get started. And you will never know unless you get started. We started on the agent journey probably two years ago when ChatGPT first came out, and we've now built those muscles of how you build agents. And it doesn't have to be the whole company. Pick a few people who are evangelists inside your organization who care about this, and then give them the space to make mistakes and start to experiment. And I think, Ben, you said at the beginning, some people are in the experimentation phase, and some have got beyond that and are now in full-on deployment. But you've got to get through the experimentation phase, and part of this webinar is actually having the discussions about how you experiment and where you get started. Yeah, 100%. I think there's been discussion in the ecosystem for a while along the lines of, well, I've got technical debt, my data is not great, so maybe I won't implement agents. And I asked a guy at a large company that implements agents where people should be focusing, and he was just like, well, you have to focus on both. You have to focus on AI, and you have to focus on your existing setup. It's not one or the other; it has to be both, really. But, Jake, coming to you next: to kick off deployment and adoption of Agentforce, where should teams be identifying use cases that deliver near-term value while also building an iterative roadmap to deliver constant ROI? Yeah.
I think that builds off of what you and Ian were saying about starting experimentation: looking for some of your very obvious use cases, like case deflection if you're looking at support, places where you can really get validation out of those experiments as you're starting. But then, in parallel with those experiments, also look really closely at your users, not just your action outputs, and understand how work is happening, especially non-linearly across Salesforce and adjacent apps. You know, we design workflows linearly. Salesforce is great because it is not that restrictive, and work really flows in a lot of different areas and doesn't follow those strict, linear prescriptions that we design business processes around. So while you're experimenting, while you're earning your chops with how agents work and how you're going to measure, also look at where work is really happening. That's going to help you identify, layered over the business process and the human behavior, where you're going to get the most ROI and the most transformation from implementing agents. Look at where time gets taken in work. Where are there error hot spots, or repetition, or friction? Those are the areas that agents are really excellent at, and those are the things where you can then consider transformation and change, to identify the use cases that are ultimately going to let you work towards a business case and ROI. Can I jump in here, Ben? I think people confuse complexity with deployment risk. You could pick a very complex agent which has got very low deployment risk. An example for us is we've got a 26-step, five-action process which is all around coaching our sales reps. Relatively low deployment risk: it's focused on our sales team, it's coaching them, and if it doesn't work exactly as expected, we're going to get great feedback.
But you could then pick a very low-complexity but very high-risk agent, which would be, you know, answering licensing questions on our customer-facing website. Nightmare. And I think people tend to mix up complexity with deployment risk, and we've got to pick low-deployment-risk agents to start with. They may have relatively low ROI to start with, but we need the space to actually make some of those mistakes and get it right. Easy ones are, I don't know, expenses policy. Everybody needs to look at it, the policies are well defined, it's an easy agent to build, and it's internally focused. So that's low deployment risk and low complexity, and it's an easy win. And we need to start looking at it through a slightly different lens than just, you know, what's the greatest ROI? We'll go for those later, but we've got to get moving, and we've got to get a little bit of confidence and trust from leadership that this stuff is not as dangerous as everyone's making out. Yeah, definitely. So someone described the implementation steps to me as: firstly, do your internal agents, then move to human-in-the-loop, and then eventually move to fully autonomous. At least the internal ones are semi-low-risk, depending on what they are. You mentioned the human in the loop, and Jake, you kind of alluded to this as well. I think there's a mindset piece here too: an agent is not a chatbot. Agentic is actually humans and agents working together. And so if you're looking at an overall business process and at how your customers are interacting, we should be thinking about ways to promote that collaboration between the human and the agent. And, again, I agree with exactly what you're saying: we can pick deployment risk that is low and start to get our feet wet really easily. Yep, 100%. Thanks, Brandon.
So moving on to our next section then, which is rethinking ROI: why Agentforce requires a new lens. Traditional ROI on software was fairly clear: reducing cost, speeding up tasks, increasing throughput. But as we've discussed, agents don't just automate steps; they completely transform what is possible and how work happens. So although measuring ROI was never completely straightforward previously, and I know Pendo excelled at it, ROI with AI now includes value generated by humans working more effectively, entirely new workflows that didn't exist before, and also outcomes that influence top-line performance as well. So with that in mind, Brandon, when companies start thinking about ROI with Agentforce, where do you see them using the wrong framework, and what's the right way to shift that thinking over to the new standards? Yeah. As I said at the start, this human-agent, agentic experience is a massive paradigm shift. And so our partners and customers are typically starting with a cost-first, or maybe an automation-only, mindset. What we really need to do is shift away from that and start thinking about more outcome-driven, outcome-oriented perspectives. How can we focus on new workflows and that hybrid collaboration we were just talking about? How do we find value beyond efficiency? We're talking about potential revenue growth, customer retention, attrition reduction, improved customer or even employee experiences. And, of course, we really have to be thinking about what metrics to measure. They're not as easy as some of the traditional metrics, so we have to think a little bit out of the box, and I know we're going to talk about the business case coming up.
So those metrics are going to be key to that business case overall. Yep, definitely. Thank you. And, Jake, once teams shift their mindset about agent value and what's possible, how do they actually plan to measure that ROI? Yeah. I think you start with KPIs. Those are the things that are going to show progress as you take those initial steps down the experimentation path. But you need to start planning for those end outputs, as Brandon alluded to, starting with OKRs for the business and working backwards: how are you going to prove those OKRs for the business, based on the transformation potential and the really outsized ROIs that are your goals for Agentforce? So I think you first have to plan to instrument the agent. You know, we're all waiting for Command Center to be released so that we can start getting familiar and have that tool to really understand the agent's actions and topics, measure them, and connect that to some of those initial KPIs. But you also need to think about measuring your humans' change: the human behaviors before, and the human behaviors after. And that's where looking further out into business outcomes, things like revenue, retention, and customer experience, comes in. How do you map those back to your workflow journey patterns, to where there's rework, errors, and time lost in your workflows? So plan for how you're going to have that insight, from a quantitative perspective and a qualitative perspective, so that you'll understand how you're linking your agent interactions to downstream user behaviors, and also see how those users are adapting upstream when they know those agents are part of the workflow, collaborating with them. Yeah, definitely. Yeah.
It's a nice reminder, in a way, that although everything is changing and what's possible is changing, KPIs and key business metrics don't change at all. Everything needs to work back from those. And, Ian, coming to you: how should ROI be defined and perceived differently when evaluating Agentforce and AI versus the automation we're used to in classic digital transformation? Well, I think, first of all, many people don't actually measure. We're talking about having KPIs, but sometimes you don't even have the right KPIs because you're not measuring those things. That's the first thing. The second thing is, with agents, and I think Brandon made this point, we may actually be able to redesign our processes so they work in a completely different way. There are things which humans weren't able to do that an agent can now do. And everything that I'm reading at the moment, whether it's Accenture, Capgemini, McKinsey, Gartner, or Forrester, everyone's saying you need to redesign your business processes and think about what you can now do and what's possible with an agent. A human won't read all 60 pages of the procedure every single time they look at a case. They won't look at 50 cases and evaluate them before they raise a new bug request. So I think the first thing is that just using the same metric of, can we make this faster, is not necessarily the right answer. That's "your mess for less." We need to think about how we redesign it. And sometimes the metric is, yes, time, but it may be accuracy. It may be a better-quality result, because you've now got access to a lot more data. Let's ground this with an example. We had a process where people raise cases, because at Elements we do have bugs, and we may have four or five different customers all raise a case.
There could be lots of different reasons why a case gets raised. There's a 45-page document that we've written which helps our customer support team evaluate those cases. They never used to read it all the time, nor would we expect them to. But an agent's really good at reading that. So we have an agent read the case as it comes in, read that document, and then produce the to-do list: these are the actions for the support team to go back and ask the customer for clarity. Again, that wasn't about speed. Yes, it was a lot faster, but it was actually improving the accuracy of the questions. We then had a second agent who would take all of the cases related to a bug and create the bug record. It would look at the case, all supporting emails, all the documents, all the transcripts; it would go and collect all of that, and then it would write up the bug in exactly the format the development team wanted. So a couple of metrics came out of that. Time to close a bug went from twenty-three days to five and a half days. And that wasn't just because the agent did it faster; it was because there was no constant back and forth with developers going, I don't understand what you're saying. And the quality of the bug documentation went from one out of 10 to nine out of 10, with a big impact in terms of productivity. And then a really unusual thing: our chief product officer said this is an emotional benefit. Just the time suck of knowing the documentation is wrong and reading it, he said, is soul-destroying, and I can't put a KPI on that, but I can in terms of productivity. So there are some slightly unusual things that come out. And just to finish off: for the early-stage experiments, don't put an ROI on them. Don't try to measure that first thing. Experiments need to be experiments.
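To make the before/after numbers Ian quotes concrete, here is a minimal sketch of how such metrics can be rolled up into relative-improvement figures. The figures are the illustrative ones from the example above, and the metric names are my own placeholders, not anything from a real Pendo or Elements.cloud tool:

```python
# Hypothetical before/after metrics from the bug-triage example above.
metrics = {
    "bug_cycle_time_days": {"before": 23.0, "after": 5.5},
    "doc_quality_score":   {"before": 1.0,  "after": 9.0},
}

def pct_change(before, after):
    """Relative change versus the baseline, as a percentage."""
    return (after - before) / before * 100

for name, m in metrics.items():
    change = pct_change(m["before"], m["after"])
    print(f"{name}: {m['before']} -> {m['after']} ({change:+.0f}%)")
# bug_cycle_time_days: 23.0 -> 5.5 (-76%)
# doc_quality_score:   1.0 -> 9.0 (+800%)
```

A summary like this captures speed and quality in one view, which matches Ian's point that time is only one of the metrics that should move.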
If they go wrong, they go wrong, because it's experimentation. KPIs and then ROI come at a level of maturity, when you're starting to work out how to build agents. But those early ones? There's no ROI. It's called learning. It's building your muscles. It's identifying the people in the organization who are going to be the thought leaders in this. So don't put ROIs on the early experimentation. Absolutely, you need to start putting those metrics in when you're starting to mature. Yeah, thanks so much, Ian. And, yeah, I think the big takeaway from this call is experimentation. I'm an avid ChatGPT user, and I have been since it came out, but I'm still discovering new use cases in my flow of work, like, every single day. Something as simple as, if I've asked it to help me produce something, and then I go and write something, just asking it to check my work and see what it thinks. It's very simple things like that, but I feel like you just have to get stuck in, and then things become obvious. Yeah, Ben, it's really interesting, that whole tool-adoption cycle of moving on to the next tool. You start to do something, and then it just becomes part of your daily routine. Then you go on to the next thing, and you're experimenting again with the next thing. And then it becomes, oh my gosh, how did I get through my day without having this tool available? So there's that maturity. And one thing we often forget, because we're all here in tech every day, is that most folks are nowhere near the maturity level, in terms of adoption or exposure to these technologies, that we are.
So we have to start with some small and simple experiments, and, you know, I love the idea of reducing the focus on ROI, or eliminating it if possible. I don't know if I've been able to effectively eliminate it in many of the conversations I've had, but I will certainly try again. Yeah, definitely. And I think we are reaching a bit of an inflection point, Salesforce aside, with AI at the moment. I'm hearing so many stories of, you know, my friends' parents starting to use it, and stuff like that. I think we're at a bit of a tipping point. So moving on to our second-to-last section: building a business case for Agentforce. So, you know, Agentforce is freemium, no quotation marks needed: with Foundations, you can get started on it. But, of course, any meaningful implementation is going to cost money. Agentforce is going to cost money. So your leadership team, your executive team, are going to need to see a business case. Let's come to Jake first. So, Jake, with a business case, what are the essential components that need to make up a business case that quantifies agent actions and customer experience while also showing long-term effects on the human work and the operating model? That was a long-winded question. Yeah. I think the business case, first of all, has to be iterative, along the lines of what we were just talking about: accommodating the experimentation and the learning, and showing the value of that as well as your long-term vision towards transformation.
And I think within each of those phases, as the maturity evolves, there are two main pillars, quantitative and qualitative, for how you're going to make the business case for the results you're looking for and what the value of that is. In a quantitative context, again, look at the performance of the agents, how we're improving and iterating on it, and its direct outputs, and at the human performance metrics you're going to track. Then also look at the qualitative. Like Ian spoke to, there's that emotional impact. You're not going to come out with hard numbers on that, but you can still measure sentiment and measure feeling, both for your workforce, their experience and their effectiveness, and for your customer experience and your outcomes for your customers. Even if your agents are initially employee-facing, they should have that impact on what your customers ultimately experience: your revenue, your delight, your NPS. As an example of how Pendo got started: we're going through that experimentation phase, looking at sales operations and support tickets and ways we can apply agents there. We're measuring that quantitatively, and we're measuring it qualitatively, getting feedback from the people interacting with those agents. And, of course, we're using the Pendo software experience platform to understand how their workflow might be changing. But our business case is looking at long-term transformation: how do we plan to move from CPQ to Revenue Cloud Advanced and have Agentforce be part of some of those hairier workflows around CPQ?
So our business case comes with that iterative approach to get through learning and long-term transformation, but it also touches on these quantitative and qualitative pillars, so that we have different ways to measure and be able to prove that value. Yeah, that's great. Thanks so much, Jake. I once heard Salesforce ROI summed up in three nice bullet points: productivity, employee happiness, and customer happiness, which I thought was quite nice. So you can definitely think about those alongside everything else you've spoken about. Coming to Ian now. So, Ian, how should teams decide whether an AI agent is the right fit for a use case in a redesigned business process? And once they do, how do they reflect process clarity, governance, and design effort on the cost side of the implementation? Okay. I think this is really important, because Jake came up with some great metrics for how you measure the benefits, but we also need to understand the cost, because the cost is not just the cost of the licenses. Actually, what we're seeing is that organizations need to do a level of process redesign, and obviously that takes time and effort, and you need to put that in. The issue, I think, with many Salesforce implementations, and we look at a lot of them, is that people haven't actually documented those business processes in enough detail. Front-office processes are easy. But if processes aren't well documented and the application doesn't quite work, humans make up for it. They're the workarounds: I use a spreadsheet for that, I ask a friend. There's all of that. Agents aren't very good at that. No, let me rephrase it: they are awful at that. That's where you get hallucination, where you get unreliability.
So I think the first thing is people really need to dig in and understand that there's a lot more detail required, a lot more clarity. You have to be a lot more explicit about the business process, about how things work, so that you can instruct those agents. I think we've been seduced by the ChatGPT experience: oh, I gave it three lines and it wrote me poetry. Okay, that's cool, I get that. But actually, we want this to go through and, I don't know, write up a case or generate a bug record or recommend the top three accounts. We want a lot more accuracy in that. So we need the business analysis discipline of really getting into it: okay, how does this really work? And if you've documented that well, the agents drop out. We built an agent that will look at those process diagrams and identify the agent use cases. It will actually walk the business spec and say, I think this is a good agent example, and also, this is a conversational agent versus an AI workflow agent. So if you've got that clarity of the business process, the agents drop out relatively quickly. If you don't have that clarity, people tend to go, oh, I could do an agent there, oh, there's an agent, and you start grabbing at agents rather than thinking about how they will actually improve the overall end-to-end process. Mhmm. We can learn a lot from fruit. What's this all about? Like a bunch of bananas: you don't see one banana in a bunch ripen ahead of the rest. They all ripen at the same speed. There's no point having an end-to-end process where you've optimized one bit and the other bits are slow. You need to make sure it all works consistently end to end.
So you need to bring it all forward, which means designing the whole process and then working out where you put different agents in at every step to move the whole process forward, rather than making one bit really good and ending up with a massive bottleneck, because everything goes really quickly up to that point, and then everything after it is a bottleneck. So think about the whole end-to-end process and get clarity on it from a business process perspective, and obviously I'd suggest using Elements. The agent designer agent piece is free, so there's no limit and no cost on that. But once you've got that clarity, the agents will drop out of it more easily, and you can start to see where the ROI is. But don't rush to build an agent the way we rushed to build in Salesforce. Agents require you to do that redesign, rethink, reengineer, and it's hard thinking. We're doers; we don't like to think, we like to do, and we need to bring those back together. Let's do decent design before we get started. Yeah, it's a really important point, because our common message here is, you know, experiment, jump in. But when it comes to implementation, you obviously want to make sure you're focusing on the right business process. Well, hang on, no. I didn't say jump in and experiment. Experiment means do those things in experimentation mode. So don't take "experiment" to mean we just get to play with agents. Experiment means experimenting with what you should be doing, just on a narrower scale. So even if you're experimenting, still go: it's narrow scope, we're going to think about this business process, we're going to do the design, and then we'll build an agent.
Experiment doesn't mean try and build an agent and see what happens, because then you're not experimenting on how you should be working in practice. The reason experimentation gives you time is that it's time to change the way you work and do the design piece properly. Think about governance properly. Think about build properly. Yeah, very important. Thanks, Ian. Brandon, coming to you — still talking about the business case. What elements must the final Agentforce business case contain to win executive approval, and how do you translate agent capabilities into concrete financial and strategic benefits? Gotcha. That's a great question. Obviously, every company has its own executive personalities. So I think approach with a lot of empathy and understand what your executives are concerned about. If it's customer experience, your business case needs to have that front and center. If it's data quality, that is key. Human oversight — I was reading over the weekend about this concept of a guardian agent that kind of oversees what's going on. And I think, you know, Jake and Ian, given the process insights that Pendo and Elements.cloud bring, there's certainly an element there where you could be thinking about what a guardian agent might do. So looking at that at the highest level is key. Obviously, aligning around the strategic and financial benefits is crucial. You wanna have objectives. You wanna have that scope defined. Ian, I love the idea of that experiment — the small blast radius, right, to start off with. Understanding the cost categories — someone mentioned it — it's not just the cost of the agent, or the credit consumption. It's the time that it takes.
It's the monitoring, the ongoing maintenance and improvement of it, understanding the risk and the risk mitigation efforts that are required. So that's super key, particularly as it relates to data. I think Salesforce has a great story around our trust layer and, obviously, our deeply unified platform. It's an area where we have a distinct competitive advantage overall. And then getting to the success metrics is super key, right? In some cases, you know, reducing the average handle time of a case is important. But in health care, that might not be the right metric. So I think it's really important to be thoughtful about what those metrics are. Are we talking about CSAT? Are we talking about NPS? Are we talking about cost savings? Are we talking about revenue uplift? Are we talking about attrition reduction overall? And I think thinking about how you can translate those actual agentic capabilities into benefits for the enterprise is really key. So, again, it's revenue, it's cost efficiency, it's customer experience. The final thing I would say in terms of doing a business case — because this is so new — is I would urge everyone here to give a checklist to your stakeholders and your executives on how to think about evaluating the business case. You know, frame it for them, because oftentimes they're gonna take it from the traditional math models. Like, is this just an efficiency play? Is this just a cost play? We need to be thinking a little bit beyond that from an overall perspective. So put that into the structure, give them that framework for the evaluation of it, and I think you'll be off to the races. Yeah, great. Thanks, Brandon.
I like the call-out on guardrails and risk mitigation as well, because, you know, if executives haven't dived into AI, they might have seen the headlines about that car dealership in the US giving a car away for a dollar, or something like that. So, of course, that's gonna be top of mind for anyone implementing AI. Cool. Going on to our final section — we're gonna do a bit of a Q&A at the end if we have some time, so please continue dropping in any questions you might have. But, yeah, we're gonna go into pre-implementation tactics and go through three core areas of actionable tips you can take away. So, coming to Ian first. Ian, before writing a single agent instruction, what concrete steps should organizations take to ensure iterations are fast and ROI data is easy to collect? I think agent design is the most important thing. Agent architecture and design. So you've gotta think about, okay, what's the business process we think we're going to be applying an agent to? Have we got that clearly identified? Then — the thing about an agent is we need to give it some fairly explicit instructions. We're not building a workflow here, but you wouldn't take an employee and go, Brandon, you've just joined the company, you seem pretty bright, you're on customer support. It's like, what does that mean? We wanna give you a job to be done, and that's quite a narrow scope. And I think architecting the agent is the first thing. A job to be done is: Brandon, I want you to look at every case, look at the account, triage it, and then tell me the top three that I need to address. That's a job to be done. So the first thing is architecting the agents so they're tight enough. And then once you've got that, you can do an agent design, which then drives how you write the instructions, or how we export the instructions.
And I think that design piece is really important, because not only is it helping you understand how the agent works, it's giving the stakeholders confidence that this is what the agent's doing. The challenge I'm seeing out in the marketplace is senior execs going, well, I'm not sure I want to go live with that agent, because I don't know what it does. But if you can say, here's a diagram, an agent diagram, this is what the agent's doing — you can see the feedback loops, you can see how it's working — now you can put the testing we've done in the context of that. It's not a black box. You've got a better understanding. I think that helps you get the agent deployed. Often, the time to get an agent deployed is not the design and the build; it's getting senior execs to sign up and go, let's do this. And the reason we're getting agents deployed so quickly is that we're engaging them from the very beginning with that design process. But then we've got the diagram, which is the governance piece, so they've got some confidence that they've got control over the agent. And I think we haven't yet really spent enough time thinking about what governance really means. It's more than Command Center. Command Center is a great start, but it's a lot more than that, in terms of giving companies confidence that this employee we've now hired, this digital agent — we've got the governance around it to make sure we understand how to hire it, how to promote it, but also how to sack it if we need to, or how to make it obsolete, and that whole cycle. There's a bit more to this than "we built an agent, isn't this exciting?" And I think senior stakeholders are worried about that, and we need to take them on that journey. Yep, definitely. Coming to Jake now, around analytics.
So prior to launching people's first agent, which analytics foundations are essential for capturing both agent performance and the downstream effect on the human users? Yeah. I think that foundation starts with the design process that Ian was talking about. And there was a comment in the chat about workshopping too — whether you're talking with leadership or with the people who are doing the work right now. That's going to set the foundation that you're gonna build your analytics and your measurement on top of, informing how you design, whether that is very narrow, as Ian said, during experimentation, or your longer-term vision. You're not gonna be able to measure your, you know, your agent outcomes, your human outcomes, and ultimately your ROI unless you understand your current work, and then understand how work is going to be done when you implement these changes. So I'm coming back to things I've mentioned before: where time goes, where friction lives, and which outcomes really matter for your OKRs. An analytics platform like Pendo can give you lots and lots of numbers, both quantitative and qualitative. But that can be a sea of data if you haven't set that foundation of what you're gonna measure as you iterate through this kind of transformational change. So I think you really have to pair that agent performance data with the human data. Pendo's building a product for agent analytics that will work really nicely with Command Center and provide that connection — how your agent performance connects to human performance and human workflows — so that you can tie all of that together.
And so I think, you know, that's a more mature motion that's looking at the long-term transformation. But start to plan for that and factor it into what you can measure now, even if it's in that narrow experimentation phase — how you're going to broaden your measurements as you look to prove your ROI more and more widely for the business. Yeah, 100%. I think those analytics are gonna be so key. I mean, how many agents did you say you have, Ian? Thirty, twenty-five? If I think about Agentforce agents, we've got 20-plus live. We've got another 30 in the pipeline that are being built. We've got probably 50 agents across the business of other types. And back to Jake's point, the metrics are great, but you want them put back in context: okay, how are they improving things? Actually, this agent's doing a great job here, but now the bottleneck is there. So that generates a business case for the next stage that we need to go and build. You can start looking up and down the supply chain, or the value chain, and go, okay, what's the next stage that we should pick off? Because now we're getting some more benefits here. But you need the numbers for that, and that's why Command Center and Pendo having that stuff in context makes so much more sense. And, again, it's a level of maturity, isn't it? Get the experimentation done, then get some operational process and start getting those measurements, then go, okay, how much better could we now get? Yeah, 100%. Sign me up for a demo when that's ready, Jake, because I'd love to see Pendo and Command Center working together. Absolutely. Cool. And finally, coming to Brandon. So on the Salesforce side, which kind of Salesforce org-specific tasks most influence an agent's ability to deliver measurable results quickly? Yeah.
I'm gonna get to the tactical in a second. I just wanna come back to something we've been talking about: this is a paradigm shift. So I think when we start thinking about adopting Agentforce, really the first internal conversation needs to be: how does Agentforce align to the overall platform vision that you have for your CRM implementation? How does it impact, or contribute accretively to, those automation investments that you're making as an enterprise? Thinking of it from that bigger, strategic perspective is super important. To get to some of the tactical things — it's no mystery. It gets down to the knowledge articles, making sure you're auditing the key Salesforce fields, making sure that you have unified customer profiles. A great plug for Data Cloud, right, to have a complete customer 360 or partner 360. Having a Command Center dashboard from the beginning — just understanding what it is you're measuring, what you built, what the impact is, and what the measurement is gonna be. So tracking those key metrics is key. Command Center is great; that's, you know, coming up. And Pendo, obviously, you've got some great insights, and, Ian, being able to pull in the process spots to understand where the next bottleneck might be. I think it's also really important to provide coaching playbooks to the folks that are using these agents, and those that are impacted by them, so they understand what behaviors are actually happening. I think, Jake, as you're looking at the behaviors of customers coming through, it's also: what's the impact of those behaviors on the folks on the inside using Salesforce?
When you have really good insight and visibility into those behaviors, you're gonna have a much better lens on the ROI of the agents overall. Yeah, definitely. Thanks very much, Brandon. That is all the questions, so thank you very much, guys. We will now go over to the Q&A. Let me just see what we've got here. Maybe a more general one — I think this came in from Andy. Yep, Andy Brown, thank you. So you said: would you agree that there is more of a mindset shift than a technology one needed? Can I broaden that slightly to start with? Because I've been spending a lot of time thinking about the implications of what an agentic, AI-first company looks like. And as I referenced earlier, you read any of the reports coming out, and everyone's saying the agentic world is not gonna be: we'll take a few workflows and turn them into agents. Some of this is actually fundamentally redesigning some of the activities that you do and saying, we've just hired this new digital employee who has actually got a bunch of superpowers. It will read a 200-page document. It will try things 15 different ways without getting hacked off and come back with the best answer. It will read through 700 records and then come back with an answer. And I think a lot of our existing business processes were optimized around the limitations of a human. If you take those away and go, okay, if we didn't have those limitations, what could an agent do? But agents have got other limitations, so how do we then mitigate for those? And I think it is a mindset shift, because we've actually gotta come back at every single operation and go, how could this be better, and genuinely rethink it. Some things may not change, but we've got to come back with that thinking, rather than going, I've got some technology.
How shall I apply this? Because then you're only optimizing what you've currently got rather than rethinking what's possible. Now, the good news is you've got Salesforce and the core data. You've got all those workflows, and with Salesforce it's easy to build that back end. What you need to do is rethink what the front end looks like — whether it's an employee experience or a customer experience or a partner experience — and how much better that could be, and focus on that. And I think the mindset shift, certainly for the customers we're talking to, is: spend a lot more time on the analysis rather than rushing into build. Give people the time to rethink things. Have the debate, rather than going, we're behind the curve, build me an agent, I need four agents by next week to make sure we're ahead. Right? Spend the time upfront rethinking what you can do, what's possible, and then you will sprint ahead rather than just running forward. I love that, Ian, because it takes me back to my product management background, and, of course, many Pendo customers are product managers, and they look at their Salesforce implementations as their product, you know. It comes back to those core product management principles: sometimes you need to spend more time thinking and designing than you do executing. And it's the same, you know, avoiding the product management trap of becoming a feature factory. How do I bolt this on? How do I bolt that on? How do I change this and add that?
Taking a step back and really rethinking what has value for the humans in your product, where agents are gonna do those things where they have superpowers, and really targeting where a product manager is trying to look, which is: what are those end experiences and outcomes, whether that's for your employees, or your customers directly, or the downstream outcomes in terms of, you know, business outcomes and OKRs. Yeah. You know, Jake, it's not lost on me that we're often pressured as product managers to focus on the first-level questions that our customers are asking. And we know what they are: oh, I want agent availability, I want response times — which are all kind of level-one questions. We need to be thinking a little bit beyond those level-one questions, as to how this human-agentic experience is really gonna change. I think another key element is, you know, getting to, as an enterprise, the people behind the agents — what we need to be doing. And one of the things that, as a leader of a team, I'm always thinking about is: how are my team members using AI to redefine their work? In terms of what they're doing, and emphasizing that human-agentic experience to deliver new value and new ideas and new capabilities in the work that they're delivering. So there's an upskilling and training element that we need to be doing internally within our own companies, to make sure that we're thinking about it beyond just those level-one questions. Can I jump in here? I've always spent my time running programs and projects, and you're always trying to hook into behavior change from senior management. Where's the catalyst, right? How do I create some catalyst? But this is a perfectly made one, which is senior management going, wow.
The agentic thing's happening. We need to get on board. This is your chance, if you're an admin, an architect, a business analyst, to hook onto this and go: if you wanna do agents, give me the time and the space to do this properly this time, rather than forcing me to be the feature factory and going, look, just get this thing built. And you need to grab onto this, because senior management don't know what the right answer is. You can help them get to the right answer by finding the Accenture article or the McKinsey article — we've got to redesign our businesses. Let's do the proper business analysis. Give me the space to do that. Don't tell me you want three agents. Give me the space to do this stuff properly, and I will blow you away with what's possible. But if you force me to build you three agents, yeah, I can build one, and then you'll be disappointed. So I think the idea is finding the catalyst with senior management: okay, let's do this properly this time, but give me the space to do it on a small scale, prove the pilot, show the benefits, and then let's roll that approach out, rather than forcing me to do something very quickly, which will be suboptimal. Yeah, great. I hope Andy was taking notes, because there's a lot of great information there — but this is recorded as well. So, Jake, a question for you. What makes Pendo suited to help organizations not just launch Agentforce, but also sustain it, drive adoption, and scale agents across the business as well? Nope, not on mute. Yeah. You know, Pendo, as I mentioned at the beginning, is a software experience management platform across not only Salesforce, but all of your software, whether that's your employee software or your external-facing software. And so we believe we're uniquely positioned to provide the foundation that you need to prove that value across everything that you're doing.
Whether that is in your early stages, as you're experimenting and being very narrowly focused to take those first steps, or as you're looking more broadly at your transformation. So, you know, we're investing heavily in, as I mentioned, the agent analytics — being able to really pull the human experience and the agent impacts together and tell that story. We would love to talk to anyone who's watching today, or watching the recording, and demonstrate more about how we do that. Great. Thanks so much, Jake. I've got one more question, and then I'll hand over to Jake to finish up with the final slide. I think this was a question from Todd. It's quite long, so I'll try and summarize it. But: I would love it if we could touch on the really important KPIs and metrics beyond just efficiency and cost. I know we've spoken about it across this webinar, but I do think it's very important, and it's perhaps something that people aren't used to. So, yeah, who wants to kick us off? KPIs that aren't just efficiency and cost. One could be accuracy. I mean, there's no point having an answer really quickly if it's the wrong answer. So accuracy is the first one — which might mean a better-written requirement spec, or a better, more complete answer around support. Getting to the correct answer more quickly, I think, is one. The other one is customer friction. The idea is, if you had an agent that knew a bit more about me, I wouldn't have to start every conversation with: well, can you tell me who you are? And what's your email address? What product have you bought from me? Well, you know that. You know who I am, you know what I've bought from you, and you know all my background support cases. Why do I have to tell you that? You should know all that.
Now, it may not be the agent — it may be that all of that is served up to the customer support person, so they've already got it in front of them. And that means we're taking away a lot of that customer friction. There are calls that I don't want to make: you've asked me to call you to update you on something, and you've already got half this information. So I think customer effort score was something people used to measure all the time, which is different from customer satisfaction. It's: how much effort is it to deal with you? Especially when I don't want to. I'm happy buying from you, but now you want other stuff from me, and you already know it, so go and find it for me. So I think there are some different angles we should be looking at, which are based on having access to a lot more information and being able to validate that information, rather than either an employee having to collate it, or you asking the customer to collate it for you. Yeah. Thanks, Ian. Anything to add, Jake or Brandon? Yeah, I would add to that. I think, you know, there's an inclination to shy away from some of that because maybe it's hard to measure. So I think leaning into the qualitative, leaning into sentiment, is really important — not shying away from it, learning how to quantify it, and ultimately tying it to, you know, business outcomes like increased revenue. It's great if there's less customer friction or customer effort involved. But how can you tie that through to: that actually resulted in expansions, that actually resulted in reduced churn or better customer retention? Taking those to the next level, because that's the promised land of the business cases and the ROI that we're trying to build here. So that doesn't come easy.
It comes with all the intention around design that we're talking about, and understanding the broad implications, but also the measurement and evaluation and reporting and building trust, so you can connect all those dots through to that promised land of what you want from your software. Yeah. We have a million mechanisms for quantitative assessment KPIs. We have so few for the qualitative. And I think our executives are often stuck on the quantitative, because that's all the measures we use across the enterprise. So as part of the business case — and I would say even earlier, as you're doing the jobs-to-be-done workshops — start to identify those qualitative measures that are impactful. Have the conversations early. You're not necessarily gonna get it right out of the gate, but at least people are using the same language as they talk about the impacts of these agents as they get deployed. Yep, 100%. And I think also, you know, AI is gonna change customer support and customer happiness for some businesses. So for the ones that don't make that shift, the gap is gonna be massive between, you know, the companies you enjoy working with and the others you don't. So, yeah, super important. Alright. Well, we're nearly at the hour mark. So thank you very much, everyone, for staying on with us, and to Ian, Brandon, and Jake. I'm gonna hand over to Jake just for a second. I think you've got one slide just to finish up on, which my colleague is going to put up, and then we'll do final words. Over to you, Jake. Yeah. Thanks, Ben. Just a word of thanks from everyone at Pendo for spending the time today, and thanks to Ian and Brandon and, of course, Ben for putting this on.
Just to bring back up: you know, Pendo is really, as our founder and CEO, Todd, says, on a mission to improve the world's experience with software. And we have tools for the quantitative, the qualitative, the visual, and sentiment, to measure those things and help you get started. So if you'd like to have a conversation and learn more, there'll be a link in the chat, and we thank everybody for coming today. Great. Thanks so much. So, yeah, just to finish off then — thank you again to everyone that's joined today. We had a massive reaction to this topic, and I think it's no surprise, really. Use cases, implementation, Agentforce ROI — it's just such an important topic at the moment. So, yeah, thank you very much for joining, and to our panel. Any final words, Brandon? No, just thank you, Ben, for hosting us today, and Jake — great to be on a virtual call with you. Really enjoyed it today. Thank you very much. And, Ian? The way to get started is to get started. Use this as the catalyst. Yeah, thank you. And finally, Jake? Yeah, I couldn't agree with Ian more, and I just want to thank you all for the honor of the discussion today and everything that we talked through. It was a great experience. Appreciate it. Great. Thanks so much, guys. Thanks, everyone, for attending. See you soon.