An Interview with Our CEO



In an interview on March 3, 2025, our CEO, Hector Campbell, reflects on FrameKit’s evolution from 3D templates to AI-generated environments, explaining key technical pivots and lessons learned. Now focused on an automated workflow for small e-commerce brands, FrameKit aims to balance accessibility with creative control — lowering costs while keeping human creativity at the centre.

How are you feeling about FrameKit? How do you feel about the progress you’ve made so far with it?

Well, it’s interesting how many times we’ve had to pivot while solving such a simple problem: building an AI tool that lets e-commerce brands and marketing teams create product photos. Even with something as simple as that, there are so many different ways you can approach it, and that’s forced us to pivot several times on the tech side.

We’re still figuring out our go-to-market strategy and who the ICP, the ideal customer profile, is. But just from the tech perspective, we’ve had to pivot a lot. Initially, FrameKit was going to be 3D template-oriented, which I think is what Omi is going for. But I think that’s fundamentally the wrong approach. In many cases, when someone wants to make something, they already have a vision for what they want. I don’t think a template is the best way to serve that, especially now that you’ve got AI.

The user can collect photos they like that are in a similar style to what they want, bundle them all together, put a prompt in, and get back an image that they’ve created. It’s their idea. I think that model works better than the template approach, where you go digging through all of the different categories, try to find a template that looks somewhat like what you’re after, and then try to customise it to get what you want. It’s a backwards approach. The main reason we didn’t go for it, though, is the cost. You’ve got to have all of those templates on hand for customers to use. You’d need every kind of template imaginable, and you’d want them to be realistic and high quality.

Realistic, high-quality 3D scenes cost a lot of money and take a lot of time to make. That was a problem we ran into fairly early on. With the very first customer we got, we realised, “Oh, this idea we had in our heads of 3D templates is not going to work.”

Originally, you described the core premise to me as “a pack of templates to help you get started.” And then I asked if you were going to individually make every template, and you said you’d make one template and use AI to create variations on it. Why is that no longer feasible? How has it evolved past the point where you can just create one template and let AI do the rest?

One of the first customers that we ever had was an international tea brand, and they wanted a new creative campaign, a new series of photos for one of their lines of products. When we created some initial images using this 3D template idea, they came back to us and said, “Actually, no, not quite what we’re after. Can we get some photos of the product in situ, in a busy, metropolitan cafe, with people hustling and bustling?” I’m thinking, “A cafe with people? How are we going to do that? How are we going to make a cafe as a 3D template?” We would have to build an entire realistic cafe. But what if they want to change it? Well, that means we’re going to need to have loads of cafes. And then I thought, “Isn’t this just going to be the same problem that we have, for every conceivable product?” What if it’s a watch brand and they want a photo of their product in a jewellery store? What if it’s a chocolate brand, and they want a photo of it in a candy store? We quickly realised that this wasn’t even remotely feasible. Sure, there are loads of cafe scenes that you can find online, 3D templates, but if they’re good, you have to pay for them, and more importantly, there’s a very finite quantity of good, photorealistic ones, which is what we’re going for.

So, even forgetting the monetary side of things, it’s the quantity. That’s a challenge because there’s such a finite number of these really good, high-quality 3D assets in the world. That in itself is a bottleneck, because there are millions of e-commerce stores, tens of millions, if not almost a hundred million, which means we might need millions of 3D scenes. And if each scene costs over a hundred pounds to make, then we’d need to spend hundreds of millions of pounds, maybe close to a billion, to get all of the scenes we might need, which is obviously completely unachievable.

So the pivot we made was: instead of placing the customer’s product into a pre-built 3D environment, let’s have AI generate an image and use that as the environment. Now we don’t need templates. You simply generate the image with AI, it looks realistic, and it can be whatever the customer desires. Anything they can imagine, they can prompt their way to. Obviously, there’s a bit of nuance in that. It’s not a case of just saying “Create a cafe” and getting an incredible cafe straight out of the gate. You still have to have a clear vision of what you want to achieve. The person creating the image still needs creativity and direction. They need to know what they want.

We have built a system to try and automate that as much as possible, but ultimately, to create something really fantastic, it needs to be someone who knows how to create something really fantastic. It needs to be someone who understands lighting, composition, and colour. They need to have a creative mind and be able to come up with unique and interesting ideas and scenes. So, to a degree, you can’t fully automate the whole process, but the workflow that we have works well. A customer can upload images that they like, pick a product, write a brief description of the image that they want, and then our system will create an image. Then, in the studio, they can customise it. They can change it.
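FrameKit’s own API isn’t shown in this interview, but the workflow described here, reference images in, a product and a brief in, a starting image out, then refinement in the studio, might look something like the rough sketch below. The base URL, endpoint, field names and identifiers are hypothetical, not documented behaviour.

```python
# Hypothetical sketch of the automated workflow described above: upload some
# reference images, pick a product, write a brief, and get a starting image
# back to refine in the studio. The base URL, endpoint and field names are
# assumptions for illustration only.
import requests

API_BASE = "https://api.framekit.example/v1"  # placeholder, not a real endpoint

def generate_product_image(product_id: str, reference_paths: list[str], brief: str) -> bytes:
    """Send the product, reference images and brief; return the generated image bytes."""
    files = [("references", open(path, "rb")) for path in reference_paths]
    try:
        response = requests.post(
            f"{API_BASE}/images",
            data={"product_id": product_id, "brief": brief},
            files=files,
            timeout=120,
        )
        response.raise_for_status()
        return response.content
    finally:
        for _, handle in files:
            handle.close()

# Example: the tea brand's "busy metropolitan cafe" request from earlier
image_bytes = generate_product_image(
    product_id="earl-grey-250g",
    reference_paths=["cafe_mood_1.jpg", "cafe_mood_2.jpg"],
    brief="Product on a wooden table in a busy metropolitan cafe, people hustling and bustling",
)
```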

With that in mind, what would you now say is the ratio of human to AI involvement? Is it now mostly human? Was it always mostly human?

I’d say it depends. With this new approach, you can have a completely hands-free experience where the customer simply describes the image they want, and FrameKit essentially does all of it automatically. But will you get an incredible photo straight out of the gate? Probably not. For all of the photos I’m making at the moment to post on social media, I’m not using the automated flow. If you know what you’re doing, you can make a much better image in FrameKit yourself.

See, a part of me really despises the fact that the word AI has become the term for image generation and large language models, because it couldn’t be further from the truth. There is no intelligence in a diffusion model at all. None whatsoever. So when an image gets generated, it’s not being generated intelligently. Really, it’s a glorified search engine. All of the images in the data set have a piece of text associated with them, and when you put in a prompt, it calculates, based on all of its training data, what combination of pixels is most likely to be correct. There’s no intelligence, no thinking, no understanding of what’s in the image at all. That’s why AI can’t create a fantastic, original image. That’s all but impossible. The most likely output from a model reflects the most common patterns in its training data, which by definition is average.
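For a concrete sense of that “prompt in, most likely pixels out” behaviour, here is a minimal sketch using the open-source Hugging Face diffusers library. It is purely an illustration of off-the-shelf text-to-image generation, not FrameKit’s internal stack, and the checkpoint named below is just an example.

```python
# Illustration only: prompting an off-the-shelf open-source text-to-image model.
# This is not FrameKit's stack; it shows the "prompt in, most likely pixels out"
# behaviour discussed above.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A generic prompt tends to produce a generic, "average" result; the creative
# direction still has to come from the person writing and iterating on it.
image = pipe("a cup of tea on a wooden table in a busy metropolitan cafe").images[0]
image.save("cafe.png")
```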

If the user tries to generate an image with FrameKit in a completely hands-free way, they will get a very average output. But if someone who knows what they’re doing positions the product the way they want it and does a couple of iterations on the image they’re creating? All of those things take critical, creative decision-making. And all of the tools in FrameKit let you do that faster, more easily, and more cost-effectively. Could you create a fantastic image in a completely hands-free manner? I’d say no. It’s probably impossible, and it will probably always be impossible.

So, is FrameKit more or less human-involved? I’d say it depends on the person. Some people just don’t have that creative capability. They don’t really know how. They don’t understand lighting concepts, composition, colour, colour space, all of those things. But at the very least, with FrameKit, they don’t need to.

You previously described the core premise in a very short sentence, “It’s meant to be a pack of templates to help you get started.” If you were to describe the core premise now, how would you describe it?

I would say it’s a powerful way for creative people to make great content without needing to learn incredibly complex, technical tools and spend vast amounts of money on incredibly expensive equipment. Think of the days when we were back in college. One of the biggest barriers to entry was needing to buy a camera, lighting, and all of these various things. You’re almost gatekept, in a way, from being able to create great things.

But the most prominent narrative in the creative space around AI is, “AI is taking everyone’s jobs”. In reality, I think what it’s doing is removing the gatekeeping. Now, creative people can make incredible things without having to spend huge amounts of money, and also without having to spend huge amounts of time. It means that people who can’t afford to hire someone to shoot all of the images for their products no longer need to. They can create good enough images of their products in FrameKit. So, to summarise it: powerful tools for creative people that don’t cost an arm and a leg. But also, something that lets ordinary people create compelling content for their business without having to spend huge amounts of money.

So really, you’re putting more emphasis on the creative studio as a whole, rather than just that one aspect of it, which is the image generation?

The image generation is a cool tool, but the automated flow is not where the best content of FrameKit is going to be made. Because it’s automated, by definition, you’re going to get the most average-looking photos from that process.

For many people, that’s all they want, but for brands that want really incredible, creative content? The automated flow is probably not what they would want. They might already be spending thousands a month on creative agencies. They will still want the same level of creativity and ingenuity in their content, which you can only really do if it’s a person doing it.

How much further do you think your AI tools and your creative studio can go? What possible features can your audience expect in the future?

I’d say lighting is an interesting one, mainly for the automated flow. We’re going to train our own lighting model, specifically on how to relight an object that has been placed into a scene unnaturally. If we 3D render an object and put it on top of a 2D image so that it looks like it’s there in the scene, the lighting on the object won’t automatically match the lighting in the environment. This is a very common compositing challenge in film and television VFX: you need to match the lighting. We’re trying to create a model that can do that automatically.
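The relighting model described here isn’t public. As a very rough stand-in for the underlying problem, the sketch below does a naive per-channel colour and brightness transfer, nudging a rendered foreground toward the statistics of the background plate before compositing. A learned model would go far beyond this, and the file names are hypothetical.

```python
# Deliberately naive illustration of the compositing problem discussed above:
# shift each colour channel of a rendered foreground so its mean and spread
# match the background plate, as a crude proxy for "matching the lighting".
import numpy as np
from PIL import Image

def match_lighting(foreground: Image.Image, background: Image.Image) -> Image.Image:
    fg = np.asarray(foreground.convert("RGB"), dtype=np.float32)
    bg = np.asarray(background.convert("RGB"), dtype=np.float32)
    out = fg.copy()
    for c in range(3):  # per-channel mean/std transfer toward the plate
        f_mean, f_std = fg[..., c].mean(), fg[..., c].std() + 1e-6
        b_mean, b_std = bg[..., c].mean(), bg[..., c].std() + 1e-6
        out[..., c] = (fg[..., c] - f_mean) * (b_std / f_std) + b_mean
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

# Usage: harmonise the product render before pasting it onto the cafe plate
render = Image.open("product_render.png")  # hypothetical file
plate = Image.open("cafe_plate.jpg")       # hypothetical file
harmonised = match_lighting(render, plate)
```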

And the reason why we’re focusing on the automated flow now is that we need to think about what kind of customer is easiest for us to go after now. There’s no point in us chasing after customers that are harder to acquire at this early stage. Designers, photographers, creative people, creative agencies, those people who want the very best images, the best quality, the best tools? Those are the hardest customers for us to acquire now, because we’d need to invest all of our time into building these complex tools.

Photoshop has been around for over 30 years. There’s a reason why the Creative Cloud apps are the gold standard: they’ve invested billions into developers to build all of these tools. Well, we don’t have billions. We have a couple of guys working part-time to build this, so we need to find a clever, scrappy way to build something that will appeal to someone now. And the one that I think will work here and now, in the short term, is the automated flow. It really appeals to people who are satisfied with average images, which, for the most part, means small e-commerce businesses. The kind of people struggling to get images of their products without spending £500, £600, maybe even £1,000 or more. Well, now they can use FrameKit for £25 a month, spend £50 to get the 3D model of their product, and then boom, they can create photos in the automated flow, and they’ll be happy with them.

So, up next in the pipeline is the polished, finished version of the automated workflow: the customer selects their product, selects some reference images, and maybe types in a brief description. That’s the feature that we’re going after now. That will probably be the one that we will be pushing in our marketing and outreach. Then hopefully, by the end of this year, once we’ve got some more capital coming in, we can invest more time into refining all of the studio tools. Then we’ll be in a much better position to go after the designers and the creative people.

I look forward to seeing these features. What are some of the product photos you’ve come up with now that you’re most proud of?

I’d say the most recent ones that we’ve done. Some of the images that we generate, I’m really blown away by. I can’t believe that we don’t already have thousands of customers, because there really is nothing else that can create images like this right now. That probably won’t last for long. I’m sure there’ll be competition at some point, but right now, there’s nothing that can create images as good as the ones that we can create in FrameKit. The ones that we’re posting on our social media.

There were some images I created the other day for Absolut Vodka. This is where some of the limitations of diffusion models lie at the moment. Even the best models right now still struggle with natural textures: things like moss and leaves. Small, fine, intricate details get blurred together and almost mashed into a mushy, flowy texture that just looks AI-generated. But you only really get that when you try to create images that have huge amounts of detail and texture in them. For the most part, you don’t have that problem. It’s only in images where you have lots and lots of elements. As soon as that comes into play, you can more easily tell, “Okay, this is very clearly AI.”

But I expect that in the coming months, or the coming year, we will probably get to a point where you can use an off-the-shelf diffusion model for generating images, and no matter what prompt you give it, the image it generates will be so accurate and so detailed in its textures that it will be impossible to tell whether it’s AI-generated or not. I think we’re getting pretty close to that actually happening.

That’s awesome. Have you got any final words for the customers you already have, and possible customers out there? What’s one last thing you’d like to say to them?

Well, thank you to the current customers we have. I’d say thank you for trying out a tool from such a small team of people. The first early customers we get are what’s going to make this whole company and this whole dream come to fruition. And everyone on the team is working very hard for you.

And for future customers, come and use FrameKit. That’s what I’d say to future customers. Come and use FrameKit.

FrameKit is an AI-powered product visualisation platform built to help e-commerce brands create high-quality marketing images without the cost and complexity of traditional photo shoots. After pivoting away from rigid 3D templates, FrameKit now combines AI-generated environments with realistic 3D product renders, giving users both automation and creative control.

Designed primarily for small and growing online businesses, FrameKit lowers the barrier to professional product imagery — while still empowering creatives to refine, direct and elevate the final result.