Evolving Enterprise Software: Interview with Leap CEO Brian Stack

Today’s interview features Brian Stack, a serial entrepreneur and visionary technologist who is reinventing enterprise software development. As the CEO and founder of Leap, Brian is leading the charge in enabling businesses to build flexible, evolvable applications without the constraints of traditional coding.

I’m thrilled to pick Brian’s brain on the limitations he saw with current software practices. With decades of experience pioneering innovative technologies, securing venture capital, and leading successful startups, Brian provides unique insights into how Leap’s model-driven approach is revolutionizing software flexibility.

In this interview, we’ll explore Brian’s journey as an entrepreneur, the inspiration behind Leap, and how the company is transforming application development for modern businesses that need to adapt quickly. Brian is an industry veteran with a prolific record of patents, and his life’s work culminates in the groundbreaking platform created at Leap.

Stay tuned as we dive into Brian’s vision for the future of enterprise software and how Leap enables organizations to keep pace with rapid change. The potential for evolvable, flexible applications without cumbersome coding is truly exciting, and Brian provides an insider perspective drawn from the frontlines of software innovation.

Brian, can you introduce yourself and give us some of your credentials?

I’m currently CEO of Leap Technology and have started multiple companies in the past, one of which was taken public (NASDAQ) and another of which is now part of a large drug company. I have secured multiple rounds of VC and PE money and hold five US patents and a couple of foreign patents, all focused on the ability to build software systems without traditional computer programs.

Leap Technology has been available for less than a year, but it is the result of 20 years of AI R&D and 5 years of product development. Both before and during this 25-year period, I was granted patents in AI-related sciences and for the creation of AI-supporting technologies.

What problems are you trying to solve with Leap?

The mission has always been to find a software mechanism that is better than the current unnatural and inefficient one we have today for delivering computing power. The computer is the most flexible technology that humans have ever created, but it is effectively turned into concrete by adding software, which is not inherently flexible. There is something fundamentally wrong about that. We can see from nature that there is infinite flexibility. Even Turing suggested that this isn’t the way to develop computer systems.

Leap bridges the chasm between business needs and getting a computer to behave in a specific way to meet those needs. Recognizing that program code is the source of inflexibility, Leap removes the need for it with model-driven software that can evolve.

What do you think are the major drawbacks of current software?

There must be a better way than trying to predict in advance every single condition the system will need to accommodate, and defining how it will need to behave under those circumstances, when the future isn’t clearly known. The costs and the process of delivering software with this approach are prohibitive, so we end up with trade-offs. The result is that by the time the software is built, it addresses only a percentage of the business’s needs, and that’s accepted as the best you can do.

What technology have you created and does this solve today’s software issues?

Everything we’ve built since the ’90s would now be recognized as AI, with the likes of ChatGPT coming onto the market and essentially changing the entire narrative around AI very rapidly.

Like OpenAI and other LLM technologies, which use a model to generate creative behavior, Leap uses a model to create computer behavior in place of software. This approach eliminates technical debt and allows applications and systems to evolve at the speed of business without limitations.

How does a model-driven technology that can evolve applications without technical debt work?

By moving the work out of the program code and into the model, in the same way that GenAI does, we eliminate the need for program code. The created model is easy to change; therefore, to change the system we only need to change the design.
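Leap’s internals aren’t spelled out here, but a minimal, hypothetical Python sketch can make the general idea concrete; the approval_model structure and evaluate interpreter below are invented for illustration and are not Leap APIs. The point is only that behavior can live in a declarative model that gets edited, rather than in program code that gets rewritten.

```python
# A loose illustration of the model-driven idea (not Leap's actual engine):
# rules live in a declarative model that an interpreter evaluates at runtime,
# so changing the design (the model) changes the system without new code.

from typing import Any, Callable, Dict, List

# The "model": plain data describing rules, not program logic.
approval_model: Dict[str, Any] = {
    "entity": "purchase_order",
    "rules": [
        {"field": "amount", "op": "<=", "value": 1000, "then": "auto_approve"},
        {"field": "amount", "op": ">", "value": 1000, "then": "route_to_manager"},
    ],
}

# Supported comparison operators for rule evaluation.
OPS: Dict[str, Callable[[Any, Any], bool]] = {
    "<=": lambda a, b: a <= b,
    ">": lambda a, b: a > b,
}

def evaluate(model: Dict[str, Any], record: Dict[str, Any]) -> List[str]:
    """Interpret the model against a record and return the actions it implies."""
    actions: List[str] = []
    for rule in model["rules"]:
        if OPS[rule["op"]](record[rule["field"]], rule["value"]):
            actions.append(rule["then"])
    return actions

# Raising the threshold or adding a rule is an edit to the model (the design),
# not to the interpreter.
print(evaluate(approval_model, {"amount": 750}))   # ['auto_approve']
print(evaluate(approval_model, {"amount": 5000}))  # ['route_to_manager']
```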

Is Leap like the current GenAI solutions? Is it compatible with GenAI?

Leap is in the same evolutionary family as GenAI; both use a model to do the work. The goal of a GenAI LLM is to create original work or analyze information to come to a conclusion. Leap’s goal is to use the model to consistently reproduce behavior that would normally be produced by application software. It is completely compatible with GenAI.

Do you think an organization can use AI to replace complete systems?

As much as AI has this promise of completely replacing human beings and taking over the world, it’s not going to be able to do it on its own. AI is not broad or capable enough by itself. It needs additional technologies to augment what it lacks.

With Leap and its predecessor technologies, we have been building those missing technologies. Doing business as usual around a new technology is always the approach people take, because that’s all they know. But just bolting a motor onto a cart that used to be drawn by a horse does not make a car. New things have to be developed: new attitudes, new manufacturing techniques, and so forth. Everything surrounding it has to change, including the business models that deliver whatever it is that is being replaced.

Yes, I think that AI can replace complete systems in organizations, and at Leap we have built the technologies to complement AI and do this at scale for the enterprise.

What is your view on the evolution of AI and society?

We have seen resistance to every major technological revolution, from the Luddites during the Industrial Revolution, who went around smashing looms because they thought the machines were going to take everyone’s jobs. History has shown that each revolution creates many new jobs, just as modern AI will; we now need prompt engineers, for example. But treating a new technology, no matter how powerful, as a complete and total solution by itself has always been the way people react, and it has always proved wrong.

Take the invention of the motor, which could power things and make all kinds of things go. It is now responsible for everything from transporting goods across the globe to moving your animals across your field. To enable this, whole new architectures such as transmissions and differentials had to be developed. Other technologies are needed to make it work, and if you don’t have those other technologies, it doesn’t operate efficiently or effectively. You can see that in the way AI is being adopted.

I’m not worried about people being concerned about losing their jobs, because every technical revolution has had the exact same fear. And every single technical revolution has displaced some jobs, but it has always displaced lower-paying jobs and created higher-paying ones. It’s always driven society upwards so that everybody’s quality of life has improved.

What does this mean for all the programmers out there, when systems can evolve, leverage AI and organizations run model-driven software?

When programmers look at their careers, many don’t believe they’re going to be programming, say, five years out. They see promotions to architect or designer roles, or a move into management. Isn’t it a good thing if we have a tool and an approach to get them there faster? If I can get you into management faster, it eliminates the need for what you were doing, but it increases the value of what you can deliver.

With model-driven software, we still need the ability to translate domain expertise into what should be delivered. It’s about getting value out of the computer system, optimizing the value the user wants or needs. That skill is not only valuable but also in surprisingly short supply. There is huge skill in being able to truly understand business needs and the actual problem being solved, and in defining the ramifications of those problems, whether downstream issues or all the different values that we can extract from them. If you can do that, and a tool like Leap gives you the ability to deliver that value quickly, then how indispensable to the business do you become?

Organizations today have significant development backlogs and technical debt, which are so monumental that we have no way of throwing enough bodies at them to ever solve them, and it’s getting worse year by year. Meanwhile, the things we’re busy automating are also expanding. So we’re going to need a lot of people who know how to make systems work. And there are a lot of systems out there that aren’t being built because of limits in budgets and people.

But wouldn’t that same person already be indispensable today?

Well, the people who can do that are, yes, but you need more of them. Historically, the best people to fill this gap are those who come up through the ranks as programmers. Model-driven software provides the ability to generate value through a different set of skills than those of someone who learned computer programming as a skill set.

We are always going to need people that understand how a system is supposed to work in the context of the business. That has nothing to do with the code. It’s logical that people who have seen good working and better-designed systems, as well as ones that weren’t, can replicate good designs based on new needs that they encounter. Leap technology allows you to accelerate that curve.

So, if we can accelerate more people out of just writing code and into solving the problems that computers are supposed to be solving, we are going to increase everybody’s value across the board.

Will generative AI bridge the software and technical debt gap?

Using generative AI to bridge that gap comes with its own very large set of drawbacks, specifically that it’s not really a technology designed for that purpose. Even if you made it perform that purpose, all you’re doing is taking the current status quo, which is not working, and expanding it across a larger population of opportunity to build technical debt. Again, it’s just doing the same thing the way we’ve always done it but doing it faster.

We get to the same place, where we don’t have what we need the way we need it. Getting to where we would have gotten anyway, just faster, and spending a lot of money to get there without gaining any other advantage, when you have a technology that could be leveraged into a greater advantage, seems insane.

How do you see Leap Technology evolving?

We have a clear roadmap, with at least another 19 designs on the drawing board right now. We want to be able to adopt AI throughout the entire enterprise, doing all kinds of things other than just writing software. We’re going to need technology that helps implement it. So the stuff we’re building is going to be a universal mechanism to effectively create the synergy that AI needs to become a dominant technology.

What’s your advice to the CEOs, CIOs, and CTOs of major enterprise companies looking to adopt AI?

Firstly, there are things you’re going to have to do the same way as your competition, simply because you have to compete, even if it’s not the most effective and efficient way of doing things. We are seeing this happening now: a sort of patch approach, applying AI wherever you can.

To adopt the technology, organizations need a strategy that charts the course from today to tomorrow and creates a sustainable competitive advantage. That’s going to require implementing processes, or at least technologies that can implement the processes, for delivering solutions that span varying problems and support larger sections of the business.

The challenge for leaders is: how will you compete with a company that can successfully replace all of its core systems with a single AI? There have got to be massive advantages there that we don’t even know about yet. But to do that, they’re going to need to implement supporting technologies to create the synergy necessary to take advantage of the AI. It’s kind of like how you can’t just take a motor and build a car; you need the rest of the car.

AI needs to be implemented more synergistically so that organizations can really capitalize on what it is. We need systems in place that can deliver that synergy and incorporate the AI, and to do that, we need systems built the way we’re describing.

Organizations are probably dealing with five or six domain-specific generative AIs and at least one or two general generative AIs. Introducing another AI into the mix, does this help?

Well, for starters, let’s start wrangling one of the ones you’ve got and wrap this technology around it, because this is the synergy. We’re not saying you should put another AI in place that does this. What we’re saying is that you should put these synergistic technologies in place to help you wrangle the AI you’ve got. One of the side effects is that you can wrangle multiple AIs with the same synergistic drivers, so it’s possible to expand what they can do cooperatively using the same techniques and the same system. Remember, it’s not just about generating application systems or replacements for software.

Can you give an example where an innovation has needed additional parts to be transformative?

Tesla learned that energy can be transferred; he learned that he could literally move power between two independent poles. Along the way he had invented radio, but he didn’t see as much value in radio, unlike Marconi. And once you have radio, then you have TV. Once you have TV and communications in general, we start having things like the internet. All of that came about because we learned we can transmit power. And do you see a lot of power transmission flying airplanes today, which is what he wanted to do with it? So the least valuable thing it brought to the table is why it was invented, but today it has value well beyond what anybody could possibly have imagined.

If you think about the computer chip that runs computers today, well, the invention of the transistor is the same as the invention of the chip. They’re the same thing; there’s just more stuff on the silicon. The man who invented the transistor was looking to replace vacuum tubes so that he could create amplifiers that drew less power for transmitting voice over a wire. Computer chips now have enormous value far beyond that original purpose.

Do we have to wait for these complementary technologies to leverage AI in organizations?

The good news is that a lot of the technologies we’ve evolved can be applied to perform the functions that enable scaling of AI in organizations. Our years of R&D allow us to take a model and process it as if it were software; think of the model itself as the programming language. There is no middleman, as it were, and the behavior becomes available on tap. With our deterministic approach, you can capture and lasso the AI and start getting predictable results. The model can be processed in real time and effectively act as the software in real time, and then, natively, because it’s a model, you have the ability to close a loop and get it to evolve.

There is no limit based on code, on a preconceived library of code, or on what somebody programmed it to do up front. It literally adapts to permutations on the fly. It could also present the option of incorporating generative AI as a solution to part of the application to help the user.
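As a rough sketch only, and not Leap’s actual implementation, the pattern described here might look something like the Python snippet below; the names generative_component, deterministic_driver, and evolve are invented for illustration. It shows a model acting as a contract that constrains a generative component, with new permutations folded back into the model at runtime rather than into code.

```python
# A hypothetical sketch of a "deterministic driver" (not Leap's implementation):
# the model acts as the contract for acceptable behavior, generative output is
# constrained to it, and new permutations are added to the model, not to code.

from typing import Dict, List

# The model doubles as the contract for acceptable results.
model: Dict[str, List[str]] = {
    "allowed_statuses": ["approved", "rejected", "needs_review"],
}

def generative_component(prompt: str) -> str:
    """Stand-in for a GenAI call; in practice this could be any LLM service."""
    return "needs_review"  # pretend the generative component proposed this status

def deterministic_driver(prompt: str) -> str:
    """Lasso the generative output: only model-sanctioned results pass through."""
    proposal = generative_component(prompt)
    if proposal in model["allowed_statuses"]:
        return proposal
    # Anything outside the model falls back to a safe, predictable result.
    return "needs_review"

def evolve(new_status: str) -> None:
    """Close the loop: extend the model itself rather than rewriting code."""
    if new_status not in model["allowed_statuses"]:
        model["allowed_statuses"].append(new_status)

print(deterministic_driver("classify this claim"))  # 'needs_review'
evolve("escalated")  # the model now accepts a new behavior at runtime
print(model["allowed_statuses"])  # [..., 'escalated']
```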

What are the key technologies that organizations need to be familiar with?

First of these is generative AI, which is a creative engine of some kind that can generate information based on a prompt. The promise of modern generative AI to accomplish that task is good, and it is very likely that the technology will evolve to the point where it’s reliably able to do so, given a deterministic driver of some sort that’s capable of lassoing it up front.

The next technology has to be capable of representation; we have one that’s molecular. The model is effectively a representation of the molecular structure, very much like we can see in the universe, and it runs that model as if it were software. We actually have all of these technologies working today.

And we’re capable of delivering something that looks very much like what the future should look like. It’s not as evolved as the 15-years-from-now version, where you just walk up to it and say, “I want this kind of system to run my business.” But the turnaround time and the cycle for getting there is shockingly short. We just did a demo where we got a minimum viable system working in a single day, with nothing more than a couple of phrases up front, in a domain we have no knowledge of at all.

Could your technology potentially evolve into a General Purpose Technology (GPT) that becomes indispensable for businesses and enterprises, regardless of their initial skepticism?

This isn’t merely about innovation; artificial intelligence (AI) has been around for decades, but what’s changing is its accessibility, awareness, and acceptance. It has transitioned from being the domain of a few innovators to being embraced by a wider audience.

While predictive algorithms have existed for some time, the significant shift occurred when they were integrated into large language models and made widely available. This required substantial investment, and now, we’re beyond the stage of early adopters; everyone is rushing to incorporate AI without necessarily questioning its limitations or unmet promises.

This rapid expansion can create an illusion of perpetual progress, but history shows that widespread adoption takes time. Just as with television, personal computers, and the internet, there’s a natural progression from initial development to mainstream integration. For AI to be fully embraced, there must be a recognized need or competitive pressure driving its adoption.

However, despite its potential, there’s a gap between what AI promises and what it delivers. This disconnect is often noticed by innovators rather than followers. Yet the fact that AI is being adopted indicates its value, particularly in its ability to rapidly develop and evolve systems. Nonetheless, there will always be resistance to change until external forces compel adaptation, which highlights the inherent nature of human behavior. Can model-driven software become a GPT? Yes, it can.

What traits and characteristics define the persona of innovative leaders, particularly in their capacity to embrace new technologies?

Some leaders are innovators by happenstance, where they just didn’t have a choice. Others are truly innovative and see an opportunity. The man who built Chrysler originally got the Buick assembly line working in a more modern, mass-production way, and he made enough money to buy out competitors. Most people don’t know this, but early last century there were over 3,000 automobile manufacturers in the US.

The great bulk of those leaders were not innovators. They saw the opportunity of what a car was, and it was a marketing opportunity, so they went out and started building them. But adopting full mass production wasn’t within the scope of many of them. Usually just a few leaders rise to the surface, and everyone else goes extinct. We saw the same with companies manufacturing computers, and we’re seeing it today with software consultancies. They’re using programmers and trying to augment them with AI. What are they going to do in a post-AI world where the model itself changes? Well, there have got to be a couple of innovators out there who see that coming, want to find that new business model, and will probably wind up acquiring any of the other ones that are strong enough to survive. There is a first-mover advantage for those types of people.

For the Leap technology, are there any metrics or specifics from an ROI perspective that I should know about?

I really hesitate to make any claims, and I’ve learned to shy away from doing so because they sound unbelievable. Even people who experience the results still have trouble believing them.

The mechanisms that we’re using today are so spectacularly inefficient that just doing it in a more natural way brings it to the norm, and the difference is so radical that it appears very impressive. We should be more impressed by the waste that we’re getting rid of rather than just the advantage that we’re gaining.

AI and software companies are making major claims, but we need to look at software failure rates. The only thing that has appeared to improve failure rates is moving to a more agile approach to development: effectively breaking the failures into smaller, more manageable chunks that you can get rid of more easily, or less expensively, instead of having them add up to a complete project failure. If you’re going to abandon something, you can abandon it earlier. There have been improvements along those lines. But overall production has also gone up, giving us what looks like a reduced failure rate that isn’t always truly reduced.

What would be some success stories for Leap technology?

We used the technology to build a banking system which evolved rapidly and had significant scope creep, to the point that we developed and released about 150 versions of a system with enterprise-grade core capability.

Using current approaches, it is difficult to estimate how long it would take to deliver 150 versions of a system; many years, at a guess. At the end of the project, there wasn’t anything the users were asking for that hadn’t been built. Literally, 100% of all requests were implemented. That’s a little bit mind-bending, honestly, because typically some requests contradict each other. How does that get handled here? We don’t want to eliminate permutations, right? Am I just saying no to certain things? Or how do you handle that in the technology?

There were conflicts in how the technology we were delivering should behave, and we simply had it build the mutually exclusive capabilities so they operate independently, depending on which one you want. Users will often put themselves right back where they were, and they will agree that they don’t like it. If you need the system to operate the bad way that you don’t like as well as the good way that you would prefer, and you must have both because you’re going to want to switch at some point, or because some of the users are willing to switch and others aren’t, then have both.
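As a loose illustration only, not a description of Leap’s mechanism, keeping both behaviors available and letting each user select one might look like the hypothetical Python sketch below; the behaviors table, user_preferences map, and handle_order function are invented for this example.

```python
# A loose illustration (not a Leap feature) of carrying mutually exclusive
# behaviors side by side and selecting one per user instead of forcing a
# single "better" way on everyone.

from typing import Callable, Dict

# Both behaviors live in the model; neither is removed.
behaviors: Dict[str, Callable[[int], str]] = {
    "classic": lambda order: f"Order {order} queued for end-of-day batch",
    "streamlined": lambda order: f"Order {order} processed immediately",
}

# Each user's preference selects which behavior applies to them.
user_preferences: Dict[str, str] = {"alice": "streamlined", "bob": "classic"}

def handle_order(user: str, order: int) -> str:
    variant = user_preferences.get(user, "classic")  # default to the old way
    return behaviors[variant](order)

print(handle_order("alice", 42))  # streamlined behavior
print(handle_order("bob", 42))    # classic behavior
```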

Compare this with traditional software development practices, where you have this whole problem of change management and user adoption. Typically, what happens is someone says, “Oh, the better way to do this is X, and that’s the direction we’re going to take this platform.” And now you must convince people that X is, in fact, better than the way they’re doing it today, or retrofit. What if you could hold both states of being?

So you can increase and accelerate adoption, though I suspect that, in a large enough install, you’ll have some users who never adopt. They just stick to the old way, even though they say they hate it, because it’s what they know.

 
