Karim Fanous - Engineering Management in the Age of AI
Matt: [00:00:00] Welcome to Cloud Radio, made for full-stack cloud operators. Cloud Radio covers all aspects of the business of software.
Matt: Excited to have Karim Fanous on the show today. I first discovered him through Substack, where we're both writers, and he writes about engineering management on a blog called cummulative.io, and we'll have that in the show notes.
And I've, as a non-technical business person and investor, been reading it for years now, and I've gotten a lot of insights out of it and wanted to invite him on the show. And as always, I'll let him give his background instead of me mangling it. So Karim, thank you, and do you want to tell everyone a little bit more about yourself?
Karim: Yeah. First off, Matt, thank you for inviting me on your show. I've also been a follower of yours for multiple years now. I [00:01:00] enjoy your Substack, and, you know, obviously it's more business oriented, which is very, very helpful for me. Uh, quick background about myself: I am presently head of engineering at a cybersecurity company called StrongDM.
I've been here for a little over four years, and my background is entirely in engineering management and software development. I've, uh, spent 25 years so far in software development across different roles, mostly over the last 10-plus years working at startups, joining early stage and then, well, trying to grow these businesses and make them large, successful companies.
Matt: Awesome. We'll try and structure this episode a bit on engineering management and VPE-type titles, then AI, and then lastly security. And one piece of your writing that stood out to me: you've described a VP of Engineering's role as programming people, plus product, plus processes. Could you elaborate on that? And [00:02:00] then, what do buggy processes look like from afar, kind of at a board level or a CEO level, and how do you debug them?
Karim: Yeah, I mean, the job is: you're ultimately responsible for delivering on the product that the business is going to sell. Okay. And that product is going to get, you know, built by a combination of, well, people and tools and processes working in concert to get this finished product out into the market.
So the job is orchestrating all three of those different protagonists. You know, you're building teams, you're hiring, or you might not be hiring, you might be parting ways with people. You're adjusting and building the team that is going to become the people that build that product. And as you grow this organization, you might start with a handful of engineers, and maybe you grow to a hundred, to a thousand.
You're gonna have to adapt and implement some processes. I know the word process is a little bit synonymous with bureaucracy, but as you [00:03:00] grow, you're gonna have to change and adapt how you're running this organization in response to scale. So the job is: you're ultimately responsible for putting a widget out the door that my salespeople can go and sell.
But behind the scenes, that's going to be built by a combination of people, tools, and processes, and you're sort of the orchestrator of those three or four different ingredients. If you are looking at it from afar, so you're at the board level, a buggy process, and I like that term, is almost going to manifest itself in business metrics that are misaligned with where you want the business to be.
Early on in the lifecycle of a company, that is almost going to be synonymous with growth. You want to be growing at X percent and you're not growing that fast. So the board might question: do we have the right product? Are we making the right investment in the product? Is the product that I'm [00:04:00] putting out there the product that the market needs?
Do I have a quality problem? That is another thing the board might ask. Am I putting the right product out, but the experience is subpar: it is buggy, it is not meeting customer expectations? And then the other part of that buggy process could be: you're putting the right product out, but you've got a go-to-market problem.
You're unable to take that product and put it in the hands of willing, enabled buyers. As you move closer to the go-to-market side, engineering's role is somewhat diminished, but it's not zero. Meaning, if you've got a go-to-market challenge, that still doesn't absolve me from my part; I can't raise my hand and say, I'm done, I gave you a good product, and, you know, adios, it's your problem. It's not; you're part of that puzzle. And the closer you are to the engineering side, where it's a quality problem or you're unable to put the right product in the market, that's squarely in the hands of the engineering function.
Matt: Interesting. And just taking that a different way, [00:05:00] because, you know, at the board level there's a certain level of metrics you can look at, kind of DORA-style things about code commits or quality or productivity measures, and you might not fully understand them. In your experience, have measures and dashboards mapped well to actually diagnosing buggy processes, or are they just kind of interesting to look at?
Karim: At the board level, I usually never surface these metrics to the board. A board doesn't really care about your DORA metrics. I mean, they might out of intellectual curiosity, but they really shouldn't. They really care about the outcomes. Okay. And the outcomes are: is the business growing? Are you growing at the clip that we expect you to, especially in a venture-backed business? They care about cost, or leverage, as you grow. If I'm doubling headcount every year but my business is not, I'm not getting any leverage, and that's a problem, because ultimately you want to be able to show the board that you're able to build a profitable business. [00:06:00] So in time, you want to be able to show that leverage: I'm able to build more, but the cost isn't growing with it, so the ROI is increasing.
So I think those are the metrics you want to focus on. The secondary metrics, DORA metrics, commits, lines of code, whatever metrics you're using within, uh, engineering, are going to help you understand where the problem is. So let's say I've got a problem with quality, okay? And I go to the board and say, hey, we've got a big problem.
The product is not really good quality, we've got escalations left, right, and center, and we've got customer churn and whatnot. Okay? That's the problem. And I need to look at the underlying metrics to figure out: okay, where's the fire? What do I fix, quality-wise? Where do I pour more resources to stem that problem?
But I'm not gonna go to the board and, you know, show them, I don't know, test failures and regression tests. You want to show them the business problem, [00:07:00] or the business metric that they care about. And internally, if they ask you how you're going to solve it, you absolutely need to be prepared to show DORA metrics and things like that if you have to.
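A toy illustration of the leverage framing above, with invented numbers; the point is the ratio, not the figures:

```python
# Toy illustration of "leverage": output per engineer over time.
# All numbers are invented for the example.
years = [2023, 2024, 2025]
engineers = [20, 40, 60]       # headcount tripling
arr_millions = [10, 25, 50]    # revenue growing faster than headcount

for year, eng, arr in zip(years, engineers, arr_millions):
    print(f"{year}: {eng} engineers, ${arr}M ARR -> ${arr / eng:.2f}M ARR per engineer")

# Rising ARR per engineer is the leverage story a board wants to see; flat or
# falling ARR per engineer while headcount doubles is the red flag.
```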
Matt: That's good to know. I think there's an element of intellectual curiosity that might spark some of those questions, but it's good to hear the ground-level truth there.
Karim: Sometimes board members will test you to see if you know your stuff. Okay. Yeah. So they want you to be audit-ready, okay, much like they might ask a CRO about their pipeline.
Okay. And if you don't know the answer to that, well, okay, that's a bit of a problem.
Matt: For sure. And that's a good transition point, too. For, you know, a non-technical founder, CEO, or CFO, what are some good but non-obvious executive interview questions to vet a VP of Engineering or a CTO? And perhaps ones that show you're not just someone who can talk the talk, but who can walk the walk.
Karim: That's the answer, actually; that's the gist of it. The questions I like the most recognize that you are leading a function, but that [00:08:00] function exists within a business context. Okay. We're here to build a company, a profitable company, and you want to vet that engineering leader to see if they're able to connect those dots.
It's one thing to evaluate them on their technical competency and their ability to lead an engineering organization, which is absolutely important to do. But I think from the CEO's or the board's vantage point, you also want to get someone who is able to connect those dots and see how that function fits into the overarching business.
So the questions I like being asked in that sort of interview are primarily business-related questions that are connected to engineering reality. Okay. Because that, I think, is going to help the CEO or the CFO interviewing that person make a judgment: I want to get someone who is pretty competent in running that function, but who also helps me run the business.
Okay. I'll give you some examples of, you know, business-related [00:09:00] questions whose answers are grounded in engineering. Launching a new product, okay: how do you think about that problem, which is squarely an engineering and product problem but touches all parts of the organization, of the business?
How do you solve that particular problem? What data do you need? Or, how do you go about solving a particular problem: maybe we're in a particular vertical today and we're going to launch a new product in an adjacent market. What do you do? How do you address this particular one?
Dealing with a disgruntled customer, or helping sales win a particular deal: how do you reason about all of these things? And you'll end up finding that if I'm answering these questions, there's going to be some element of my answer that's engineering-specific, but it still is a business problem.
It's not, you know, my commits are going to go from X to Y, I'm going to hire five engineers, and we're going to build stuff. Okay, that's cute, but what is the big problem we're trying to solve?
Matt: That's great. Kind of after the interview, let's talk about the [00:10:00] CEO and VPE relationship. What is one thing a SaaS CEO thinks they understand about engineering that ultimately causes friction or wasted resources?
Karim: This is a lesson I learned early on in my career, pretty much, maybe not so much nowadays. I'm usually hired into that position at the point where the company has sufficient confidence that we have product-market fit. You have a small team, maybe a handful, and you've built a kernel of the product. You've tested it, you've taken it out to market, and you've got a strong signal that there's a there there.
We want to scale the business. We want to hire a CRO, we're going to hire more reps, and we want to hire a lot more engineers. The fallacy, or the illusion, is: I've got 10 engineers today, we build 10 widgets in a year, I can go and hire 20, and we're going to build 20 in a year. The answer is no. You're going to build 10.
The analogy I give them is: you're going to double the engineering capacity, [00:11:00] and your productivity is actually going to, if you're lucky, remain as is. The reason is that it takes time and energy and effort to find people, to train them, to onboard them, to get them to be productive, and that diverts the capacity that you are assuming is going to deliver product into something else.
And that's one of the big frictions I had early on with CEOs, who were like, hey, you've doubled the team and we're still operating at the same, you know, speed or throughput or velocity as six months ago. Well, yes. It's just like, you know, your CRO is going to tell you: I hire a rep today, they're going to be fully productive in, I don't know, nine months, six months, whatever your ramp looks like. The same thing happens in engineering. And that's one thing that can result in some friction, because from a board level or a CEO level, they're watching the spend go up, the headcount go up, but: I'm not seeing anything yet.
The output is flat. How do I move that, bend that curve? And that takes a little bit of [00:12:00] time.
Matt: And how do you manage that? Do you really emphasize it, preview it? Like, how do you kind of de-escalate it?
Karim: I explain it. Now, I don't have strong data; this isn't science. It's not like, you know, it's going to take exactly nine months or six months or a year.
It's not particularly scientific, but you explain what's about to happen to the organization. You want me to go double the team? Great. Well, that's going to take some capacity from the existing team. Guess what? They're going to be interviewing. They're going to be onboarding the new hires. The new person doesn't know the code base.
It's going to take some time for them to gel. And when you tell that story, maybe you connect it to something that they're more familiar with. The sales analogy is a really good one: if you look at any sales model, there's a ramp. I hire a rep today; they get to full quota in, I don't know, nine months, twelve months, depending on your ramp.
Same sort [00:13:00] of thing. The challenge with engineering, which is ironic given the name engineering, is that we don't have hard data to show you what that ramp looks like. So it is a little bit of storytelling. And it's also on me to show results as quickly as possible, but also to buy myself enough time for us to find the right people, hire them, onboard them, and become more productive. Because the very last thing I would want to do to the business is just go out and hire people that are a bad fit, for the sake of doubling the team and saying mission accomplished. So you want to be responsible and be upfront and transparent that, hey, this isn't a linear curve.
This is going to take some time, but you benefit once we are on the other side of it.
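A back-of-the-envelope sketch of the ramp dynamic Karim describes; the six-month ramp and the 15% mentoring tax are illustrative assumptions, not figures from the conversation:

```python
# Toy capacity model: doubling headcount does not double near-term output.
# Assumptions (illustrative only): a new hire ramps linearly to full
# productivity over 6 months, and each new hire costs an existing engineer
# 15% of their time in interviewing/onboarding/mentoring until ramped.

RAMP_MONTHS = 6
MENTOR_TAX = 0.15

def team_capacity(veterans: int, new_hires: int, months_since_hire: int) -> float:
    """Effective engineer-equivalents for the team in a given month."""
    ramp = min(months_since_hire / RAMP_MONTHS, 1.0)    # 0.0 -> 1.0 over the ramp
    mentoring_drag = new_hires * MENTOR_TAX * (1 - ramp)
    return veterans - mentoring_drag + new_hires * ramp

# A team of 10 doubles to 20 at month 0:
for month in [0, 2, 4, 6]:
    print(f"month {month}: {team_capacity(10, 10, month):.1f} effective engineers")
# month 0 prints  8.5: capacity initially drops below the original 10
# month 6 prints 20.0: the curve only bends after the ramp completes
```

Under these assumptions, effective capacity dips below the original ten engineers right after the hiring wave, which is exactly the "spend is up but output is flat" picture a CEO or board sees.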
Matt: That's great. And then, just at a higher level, what's some general advice you would give a first-time VP of Engineering, or someone who's looking at that role, aspiring to it?
Karim: [00:14:00] Yeah, that's a very good question. If I look back at my career, at my first time in that role, one of my biggest learnings was that I just assumed the role was, you know, individual contributor plus-plus. It's not. It's a completely different role. You have to shed all the skills and, um, the strengths that you developed as an individual contributor and accept that it's a different position.
It's like moving from being a player on a sports team to becoming the coach.
Matt: Okay.
Karim: And that takes a bit of a mindset change, because inevitably your core strength has been, you know: I was a very competent engineer, and that's why I was rewarded with becoming the head of engineering, so I have to be involved in everything in engineering. You have to figure out that you are now operating at a different elevation, and the problems you're trying to solve aren't strictly engineering problems. Yes, some of them are, [00:15:00] and yes, you have to be involved in understanding the day-to-day operations of your team, but you're no longer a software engineer.
Okay? If you're going to take that role assuming that you are going to be, you know, the best and most prolific software engineer on the team, you're going to fail. So that takes a little bit of reflection, a little bit of working with your peers and your leadership team on what problems the business seeks to solve, and then that becomes your job.
Okay, as opposed to hands-on-keyboard typing code, or working with Claude to produce a hundred times that code. The job changes. It's a very different job.
Matt: That's fascinating, and great advice. And I think now we'll go into AI themes, which is obviously the topic of the day. One of your recent posts was titled "AI Made Us Faster. That's Not the Same as Better." And there was one [00:16:00] particular quote within it I liked: "AI now meaningfully accelerates parts of the job, but it doesn't absolve us of ownership. Used deliberately, it makes good engineers more effective; used carelessly, it amplifies debt." And one question I had: at an engineering leadership level, how do you stay close enough to the code, to the processes, so you get what you inspect, in a very, very high-volume environment where debt can amplify?
Karim: Amazing question, and the reason I'm laughing is that the answer changes by the week. When I first wrote that piece, I think it was August, the world was very different than it is today. So I'll give you two answers. Back then, AI models, especially coding tools like Claude Code, were pretty amazing, and they still are, but [00:17:00] I don't think it hit me until earlier this year that the human in the loop was how we were using these tools.
Like, I'd use Claude, it'd write a bit of code, I'd look at the code, I'd go, write some more, I'd look at the code, and it'd write some more. So there's always been that human in the loop. We'd inspect and see, hey, what did this black box do? Is that code correct? Is it code that I trust?
And if it is, then I'm going to push it out. So you always had AI building something and a human looking at it, at least this is the way we were using it, and pushing it out the door. Now we're very quickly reaching the point where that is no longer going to be how software is built, in the very near future. And there are parts of the software world already operating this way, where basically you've got the AI shipping it, lock, stock, and barrel, with no human looking at it.
And that's a very, very different problem than a human inspecting it. So you're going to have to solve that with software and models somehow, [00:18:00] eventually, meaning the entire loop from ideation to creation to verification is done hands-free, with no human in the loop. How? I don't exactly know, to be honest, but I know that this is how software will be developed fairly soon.
Anthropic just released their latest model, 4.6 I think, last week, and in doing so they also published a post about building a compiler, a fairly sophisticated piece of software, entirely autonomously with a bunch of Claude Code agents, at a total cost of $20,000. And this is a fairly complex piece of software that was written with no human intervention.
Zero humans in the loop. So that's just the beginning, and I think this is going to accelerate, and as an industry we're going to have to answer this question that you asked me. And the answer can't be that a human is going to look at the code, because the volume of code that these agents are going to write is going to far surpass my ability [00:19:00] to comprehend it, or my ability to reason about it.
And that's one part. The second is: if you're doing it this way, you've got something moving at the speed of GPUs but throttled by the speed of humans. If the human has to review it, you're ultimately moving at the speed of humans, and you want to be moving at the speed of GPUs.
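One way to picture a loop that never slows to human speed; this is a sketch only, and every function here is a hypothetical stand-in, since the verification tooling Karim describes doesn't exist as an industry standard yet:

```python
# Sketch of a hands-free generate/verify loop. Every name below is a
# hypothetical stand-in; the point is that machines close the loop from
# generation to verification, and a human is consulted only on escalation.

def generate_patch(task: str) -> str:
    """Stand-in for a coding agent producing a candidate change."""
    raise NotImplementedError("stub: call your coding model here")

def gates_pass(patch: str) -> bool:
    """Stand-in for machine verification: tests, static analysis,
    property checks, a second reviewing model, policy checks."""
    raise NotImplementedError("stub: run your verification suite here")

def autonomous_cycle(task: str, max_attempts: int = 5) -> str | None:
    """Generate, verify, retry; ship only what the gates approve."""
    for _ in range(max_attempts):
        patch = generate_patch(task)
        if gates_pass(patch):
            return patch  # shipped with no human review
    return None           # escalate to a human only after repeated failure
```

The open question in the conversation is precisely what belongs inside gates_pass once human review no longer fits the volume.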
Matt: And just reacting in real time, and taking a grandparent's perspective, right? Where you just hear this and don't know a lot of the reality, but you zoom far out. You might respond like: what in the world are you doing with that much code that no one is looking at? I guess, is that advisable?
Karim: It is, if you're able to verify it systematically. If you're going to be verifying it by a human looking at it, then you're still going to be moving at the speed of humans.
Okay, which is fine. I mean, this is how software has been developed since, uh, the birth of the industry. [00:20:00] But I think we're at this inflection point now where we're all coming to the realization that this is no longer going to be the case, and therefore we're going to have to solve that problem. We can't just toss it over the wall and say: it's too much, I can't comprehend it, no one's able to comprehend it, so, you know, here goes, and if it works, great, and if it doesn't, well, we'll roll it back. We're going to have to solve it with engineering. Okay. And new tools, and a new software development life cycle. And, you know, we're sort of living it now and trying to figure out how to answer these questions. But I can tell you definitively, or maybe that is too strong a word, that it cannot be a human looking at it.
What the answer is, I don't know yet. And I think as an industry, we're in the middle of trying to figure out what that looks like, what that future looks like.
Matt: That's an excellent answer. And there are trade-offs, and I think, you know, reading between the lines, or reading explicitly: it's the speed and the volume that matter. Some of these human-centered, [00:21:00] QA-centered processes might leave you in the equivalent of 1995 when we're entering 2040, right?
The speed is the volume. And those human concerns, human review, well, these models have gotten so good, so fast, that the concern might be foolish.
Karim: It's a valid concern, absolutely. I don't think it will be solved by a human in the loop. I think it'll be solved, obviously, by human ingenuity, but ultimately it's machines that are going to be responsible for, well, building it and verifying it, the full cycle.
Matt: Fascinating. This is, this is fascinating.
Karim: It is.
Matt: I think, you know, that transitions well into some of your more recent writing. "Preparing for a World Where Humans Don't Write Code" is the article, and we'll have it in the show notes. A really good, strong piece, and I like this analogy: "The programmer's craft will follow the same arc [00:22:00] as animation. In place of writing every line, much of the work will involve guiding and integrating what machines produce: defining the architecture, reviewing generated components, making judgments on trade-offs and user experience, and ensuring the final system works as intended." At the engineering management level, what does that mean? What skills change? What gets more important? What are some elaborations there?
Karim: Also a very good question, thank you, and one that is difficult to answer, I think. The job definitely evolves and changes and morphs into more than one job. Uh, and I think this answer might apply to anyone who is in the product development machine, whether you're, you know, strictly in product management, in design, or in engineering management. I think in a world of AI, [00:23:00] those three roles get mushed into one. Okay? Where you can no longer say: I am strictly in the engineering domain, and I am not going to be as close as possible to my customers and the market, or have at least some ability to design software, pixels and all, because the tools you have at your disposal today allow you to do all of that.
Matt: Okay?
Karim: You and I can come up with a prototype, soup to nuts, with beautiful, functional software, and put it out to market, at least to test a hypothesis, in hours if not minutes, depending obviously on the scope of what we're doing. So the cycle time from ideation to putting something out to market and testing a hypothesis is shrinking.
So I think the job then becomes: how do we run as many experiments as possible, to validate or invalidate hypotheses very, very rapidly? And that's a combination of engineering, product, [00:24:00] and design. Because what you're trying to do is: I've got this amazing machine that can build anything, okay? And it can build it very, very quickly.
And, you know, the cost per token is plummeting as well, so it can build it very cheaply. How do I take advantage of that? And that's going to be a function of trying to figure out what the right problem to solve is, with the right solution. I think that's it: the role changes from strictly managing a team of software developers, making sure the product out the door is correct and on time and whatnot, to orchestrating experiments. Again, you're asking questions that are very difficult to answer in the present. It's an amazing question; it's just hard to answer.
Matt: Oh, and I know that feeling. I've come on podcasts and you get asked, you know, "what is the...", and it's just like, someone could talk for
Karim: Yeah,
Matt: 82 hours and would barely scratch the surface, and there's no way I can know all of that. And to tie it back to your answer: you know, we're building [00:25:00] some things internally, technically, at Cloud Ratings, which has been fascinating. And then just in the course of this episode, right, I'm hearing about volume, these amazing tools, and, in that last answer, all of the experimentation.
And it's a bit counterintuitive, because you see all of the trend lines around headcount, and everything should get more efficient. But when you think about how amazing these resources are, and the sheer, insane volume of opportunities it creates for those experiments, and the payoffs to those experiments, you might think: hey, instead of having one VP of Engineering, I should have five.
And I should have five product managers instead of one, and we should just be going insane. And I've been considering that for us, right? Where we should actually radically expand the human side to even tap into the insane potential of [00:26:00] AI. Am I crazy?
Karim: You're not, so long as, I mean, the missing link is still this: if you're going to be operating at the speed of humans, then yes, you need to hire more humans. But if it's fully autonomous, if we truly believe that at some point this is going to be pretty much autonomous, then there are other problems we need to solve, because we're not ready for that yet as an industry.
But your intuition's a hundred percent correct. I can run far more experiments and build a lot more today than I could ever build before in the history of mankind, or humankind, right? So your instinct is the right one, which I think is also why you start seeing the overreaction in markets, this "it's the end of software" narrative, because, well, you and I can build any piece of software now, right? We can just build Snowflake and Databricks while we're on this podcast.
Matt: That's good to hear. And again, apologies for the impossible questions.
Karim: They're great questions, Matt.
Matt: You're in the hot seat, and we'll transition to a [00:27:00] career question, right? The 10x engineer: as we move into this era, does the floor get raised for the mediocre engineer, or does the ceiling go up so dramatically that the 10x engineer becomes a 1,000x engineer?
Karim: I think both happen. Okay, at least in my thinking, both happen. And the analogy I use is that of a craftsman, a carpenter.
Matt: Okay.
Karim: And you're not using power tools; power tools have not been invented yet. Okay? And you're really good at your craft. And I come along and hand you power tools. Okay? I give you, Matt, the expert carpenter, power tools, and I give a novice carpenter power tools. Who do you think is going to be the better carpenter?
It's still you. Okay. I think the same applies with, you know, a prolific software engineer plus AI: a hundred times better than [00:28:00] a prolific engineer without AI, and a hundred times better than an average engineer with AI. The reason is experience, taste. You've seen a breadth of problems.
You're able to articulate and constrain the domain the AI has to operate in, and these things come from, well, mileage. You've seen many, many, many problems, and you're able to work with an immensely powerful tool like AI and guide it, which is still an important thing to do, guide it to deliver, you know, the correct product that you want.
So I don't think we all become prolific engineers just because we're able to prompt tools like Claude and Codex. I still think that experience, that intuition, that taste is going to be immensely important in a world where, you know, pretty much anyone can build software.
Matt: That's, I think, an excellent answer. And now we'll move away from AI, though AI is ever present, into security. [00:29:00] At the high level, what is the biggest security risk or security weakness? What are people missing, kind of on the non-technical side, the investor side, in this rush to incorporate AI into everything, to involve AI in so many functions? What are some security risks that are non-obvious, and how concerned should people be?
Kind of on a one-to-ten scale: how freaked out, or not, about security and AI should we be?
Karim: I don't think we should be freaked out, but I think we should be concerned, and I think we are. There's a multitude of risks, and the obvious one is that these systems are non-deterministic. With the canonical software that you and I are used to, you and I can read the code and figure out, okay, what is the domain, what are the constraints, in which this piece of software is going to operate? If I give you a database, [00:30:00] it will not go and, I don't know, update HR records. It's just going to store data, and that's the end of it. Okay. And, you know, give you the ability to retrieve data. So you've got this box that the software operates in. With AI, you've now got an LLM embedded in a piece of software.
It is non-deterministic. Okay? It can perform actions that you and I, looking at that code, cannot envision. It can go and update those HR records. It can perform these non-deterministic actions, which our tooling, in security or in the software development life cycle, is not equipped to detect, let alone react to.
So all of a sudden we have to deal with a non-deterministic piece of software embedded in our workflows, embedded in the enterprise. Okay? And you might have heard in the news of someone playing with an AI agent that deleted their production database. Okay, that's going to happen, okay? Where we have these agents, and, you know, all of a sudden they perform an action that was [00:31:00] not intended, and, well, it caused some catastrophic failure.
I think we're going to see more and more of that as AI adoption increases. The other thing that is also not going to work goes back to the same problem we talked about with the human in the loop in software engineering. If I bring into your enterprise an autonomous agent that is able to perform some job function, okay, and I shackle it so that every time it's going to do something, I'm going to ask a human, well, that again becomes your GPU moving at the speed of humans.
So you're moving at the speed of humans, and you're missing the point entirely of getting these autonomous agents. So if you're bringing truly autonomous agents into your enterprise, which is going to happen, then you need to solve the security problem without a human in the loop. You still need a human in the loop for sure, but not for every action.
You're going to have to solve it with a different security toolkit, not the one we have today, because the toolkits [00:32:00] we have today are things like policies and RBACs and ABACs: Matt is allowed to do A, B, and C, but not D. It's very difficult to articulate what an agent should be able to do with the same primitives that we've applied to humans. So that toolkit is thrown out the window, and it's going to have to be replaced with something else, something that we as an industry are, you know, thinking about, trying to build and innovate on. But I think, to your point, the security approach we have today is going to change in response to having these autonomous agents running within the enterprise and accessing all sorts of different systems in the enterprise.
And these are very, very important, critical systems: databases, systems of record, HR systems, financial systems, and so on.
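To make the mismatch concrete, here is a rough sketch, with hypothetical role and permission names, of why human-shaped RBAC primitives struggle to describe an agent:

```python
# Classic RBAC: a human's allowed actions enumerate cleanly up front.
# All role and permission names here are hypothetical.
HUMAN_ROLES = {
    "finance": {"erp.read", "erp.write"},  # finance touches the ERP, nothing else
}

def allowed(role: str, action: str) -> bool:
    return action in HUMAN_ROLES.get(role, set())

# An autonomous agent composes tools in ways no one enumerated up front.
# Each step below might even be individually grantable, but the chain
# (read HR data, summarize it, post it somewhere public) is the real risk,
# and static role/action pairs cannot express intent across a chain.
agent_plan = ["hr.read", "llm.summarize", "slack.post"]
print([allowed("finance", step) for step in agent_plan])  # [False, False, False]
```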
Matt: And to get a little bit more specific, because for folks like me, a lot of AI and the security component is still very theoretical, right? We understand volume, [00:33:00] non-determinism, kind of unsupervised operation, but seeing the solutions to a generalized problem might be hard, and some of your writing has been quite helpful there.
And one of the themes, or solutions, was around kind of an employee rebranding. You've argued that we need to stop treating bots like scripts and start treating them like employees. Is that part of the framework you're discussing?
Karim: Yeah. My, uh, co-founder and CTO here at StrongDM uses an analogy where you have to treat AI as a coworker.
Just by way of context, we actually have some pockets of our team here using both: we've got canonical, traditional software engineering, people typing on keyboards and committing code, and we also have autonomous agents. We have both, because we're trying to learn and slowly adapt and understand what that world looks like.
So we've got agents that are committing and writing code. And if you treat that agent [00:34:00] as a bot, that's one thing. If you start treating it as a coworker, you're going to build different intuitions, different tools, and different ways of interacting with that AI than you would with a Slack bot. It's no longer a Slack bot.
I mean, it's not a living thing, but it's a thing that you interact with. If it makes a mistake, like if you and I are working on a team together and I make a mistake, you're going to interact with me. Maybe you're going to text me, maybe you're going to email me, maybe you're going to pick up the phone and call me.
So even the way we communicate with these agents is very, quote unquote, human-centric. We can communicate with these agents in all of these modalities. So I think that analogy is correct, and it's one that has been very helpful in our internal understanding and use of AI agents, at least in building software.
I think when it comes to security, you also want to deal with it with some nuance, and the nuance is: with a human, I can constrain what they do. This [00:35:00] is sort of a superhuman being that can do things we didn't imagine it would do, because it's non-deterministic.
Whereas with humans I can say, you're in finance, you should only touch, you know, your ERP system, and that's it. But these agents might perform actions that we did not imagine them doing.
Matt: And then, since you guys have been doing it at StrongDM, kind of treating the bots like employees: what are some learnings? Like, do you actually give them names?
Karim: We do,
Matt: Like, add them to Slack? For someone who's never tried this, what does it mean to kind of make them a teammate, or an employee?
Karim: Absolutely. They have names. They're on Slack. I can Slack them, I can text them, I can have them on Zoom, I can email them, and they can email me back.
So they have this identity, which is a loaded term because I work in identity security, but they do have this identity and this persona. I can check in and see what they're doing. I can give them feedback. I can see all the [00:36:00] actions and trails they're producing, to try and understand how they're thinking.
And yeah, I can communicate with them just like I can communicate with you. I can talk to them. I can pick up the phone and call Jen, which is one of our agents. I can email her, I guess, or it, or they, I don't know. But yeah, it's like a coworker.
Matt: Are some of these rituals for you, or for the agent?
Karim: For both.
Matt: That's what I'm thinking. Like, just for me, if, you know, all of a sudden StrongDM made the mistake of hiring me and I've got these agents, or if we considered implementing them here at Cloud Ratings, I think some of this would be for me, right? Just to kind of feel like maybe I develop some level of trust, or there's a bit of magical thinking that my feedback helps, you know? And that's a question, right: does any of this feedback actually do anything?
Karim: Oh, absolutely, it does. Yeah, the feedback does [00:37:00] help. To explain the feedback: have you used Claude Code, or Claude at all?
Matt: Just a bit.
Karim: Okay. So one of the superpowers these agents have is that they're able to use tools. Just like primitive human beings started to use tools and became, you know, a lot more advanced, these agents are able to use tools, simple tools, but they can use them much quicker than you and I can, and they're able to compound their usage of those tools. But sometimes they'll take the wrong tool. Okay. And the feedback we give them, and ironically this feedback can come from a human as well as from another agent that is watching what the agent is doing, is like: actually, next time don't use this tool, use this other tool. And we'll incorporate that into the learnings of the agents.
So the next time they perform a task that's similar to the one we just gave them, they will not use that tool; they will use a different tool, or adapt. Okay, so we're [00:38:00] trying to guide these agents to pick the right tool and perform the right set of actions, and to correct their mistakes if they're doing something similar in the future.
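A minimal sketch of that feedback loop, recording a correction and consulting it on similar tasks; StrongDM's actual mechanism isn't described in detail here, so the structure below (the keyword matching, the tool names) is entirely assumed:

```python
# Minimal sketch of tool-choice feedback for an agent. The storage and the
# similarity matching are deliberately naive; the point is the loop:
# observe a mistake -> record a correction -> apply it to similar tasks.
from dataclasses import dataclass

@dataclass
class Correction:
    task_keyword: str  # crude similarity signal: a shared keyword
    avoid_tool: str
    prefer_tool: str

corrections: list[Correction] = []

def give_feedback(task: str, avoid: str, prefer: str) -> None:
    """A human (or a reviewing agent) records: for tasks like this, swap tools."""
    corrections.append(Correction(task.split()[0], avoid, prefer))

def pick_tool(task: str, default_tool: str) -> str:
    """Consult past corrections before trusting the agent's default choice."""
    for c in corrections:
        if c.task_keyword in task and default_tool == c.avoid_tool:
            return c.prefer_tool
    return default_tool

give_feedback("migrate database schema", avoid="raw_sql", prefer="migration_tool")
print(pick_tool("migrate users table", default_tool="raw_sql"))  # -> migration_tool
```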
We actually open-sourced quite a bit of that last week. We call it the StrongDM Software Factory. We've contributed a lot of it out into the world.
Matt: Fascinating. And where can people find that?
Karim: I'll post it for you.
Matt: Yeah, we'll add it to the show notes. That's fascinating. I think a lot of people would be inspired by this, and would want to see what you are doing. It all sounds very impressive and worth exploring, so again, we'll put that in the show notes. Look, we've had some phenomenal answers from you, and I can imagine everyone's going to want to stay current with your blog and this type of insight. It honestly has been amazing.
Where can people find you, and is there anything else you want to promote or share about yourself?
Karim: I write, I mean, we connected on Substack. I write semi-irregularly, so anytime I've got something [00:39:00] I'm thinking about, I'll push it out there, mostly for my own education. So my Substack is cummulative, with two Ms: cummulative.io.
You can find me there, and you can find me on LinkedIn. I am a lurker on Twitter, or X; I don't contribute much there, but I like consuming content there. But I think between my blog and LinkedIn is where you'll find me. And if you want to reach out to me through my blog as well, via email, I'm more than happy to connect one-on-one too.
Matt: Awesome. Well, Karim, this has been awesome. I really appreciate you making the time, and the phenomenal answers.
Karim: Thank you. This was a lot of fun, and your timing was impeccable with all of this AI. It really, truly is an incredible time to be alive.
