AI as Co-Creator: Insights from Wharton’s Kartik Hosanagar on the Future of Human-AI Collaboration

WAFFA
25 min read · Jul 11, 2024


On the evening of July 9th, we gathered for a live fireside chat with Kartik Hosanagar in San Francisco at the beautiful showroom of MillerKnoll to discuss AI’s current state and future, focusing on its impact on creativity, business, and human skills. Kartik, a Wharton Professor and Co-Director of AI at Wharton, shared insights from his research and experiences with his AI startups, including Jumpcut Media, which uses AI to help Hollywood find its next big movie hit.

Kartik emphasized the complementary nature of human and AI collaboration, highlighting how AI can serve as a co-creator rather than just a productivity tool. He stressed the importance of building trust in AI systems and discussed the evolving landscape of skills needed in an AI-driven world.

Addressing concerns about AI’s impact on jobs and creativity, Kartik offered a perspective on how AI could enhance rather than replace human creativity. He also touched on the need for a shift in mental models when approaching AI, suggesting we focus on how AI improves upon the status quo rather than expecting perfection.

Key Takeaways:

  1. Human-AI Collaboration: The combination of human and AI capabilities often outperforms either alone, especially in creative tasks. This is due to the different approaches of biological and artificial neural networks.
  2. Task Partitioning: In human-AI collaboration, it’s crucial to identify which tasks are better suited for AI and which for humans. For example, AI can handle data analysis, while humans excel at emotional resonance, creative decision-making, and going beyond the data.
  3. Trust in AI: Building trust in AI systems is a significant challenge. It involves educating users about AI’s capabilities and limitations, implementing transparency measures, and using techniques like agentic approaches to reduce errors and hallucinations.
  4. Impact on Creativity: While there are concerns about AI destroying creativity, Kartik believes AI can enhance creativity by freeing up human bandwidth for more complex tasks and lowering barriers for new creators to enter the market.
  5. Future Skills: As AI takes over more technical tasks, skills like empathy, creativity, leadership, and communication will become increasingly valuable. The ability to work effectively with AI will also be crucial.
  6. Education and AI: There’s a need to integrate AI education into curricula, teaching students how to use AI tools and understand their implications.
  7. Startups and AI: For AI startups, building a moat may involve unique go-to-market strategies, agility in staying ahead of trends, and focusing on industry-specific compliance and trust-building.
  8. Content Overload: As AI enables the creation of vast amounts of content, AI-driven recommendation systems will become increasingly important in content selection and consumption.
  9. Industry Transformation: AI is likely to significantly impact various industries, such as film production and distribution. It could potentially democratize content creation and change how movies are made and released globally.
  10. Long-term Perspective: While there are concerns about the immediate impact of AI, it’s important to view it as a general-purpose technology whose full effects will be realized over time, similar to previous technological revolutions.

These takeaways highlight the complex and transformative nature of AI across various sectors, emphasizing the need for adaptability, continuous learning, and a balanced approach to integrating AI into business and creative processes.

Kartik Hosanagar with WAFFA’s executive team (from left Shannon Grant, Kaitlyn Qin, Caroline Dahllof, Amy Peppers)

We are grateful for our sponsors:

MillerKnoll: MillerKnoll is a collective of dynamic brands that comes together to design the world we live in. Together, we form an unparalleled platform for building a more sustainable, caring, equitable, and beautiful world.

Pivot: For 50+ years, Pivot has been the trusted partner for companies seeking more than office solutions. Our creative minds and unbreakable relationships craft spaces that reflect your vision — spaces where people thrive and deliver results.

TRANSCRIPT OF FIRESIDE CHAT AND Q&A:

Transcript has been co-edited with AI for readability.

Caroline Dahllof: You have mentioned in various talks that human plus AI beats both AI alone and human alone when it comes to creative tasks. So why is this?

Kartik Hosanagar: First of all, there’s a lot of research that shows that for many tasks where we have enough data, AI, not surprisingly, is beating humans on average. But at the same time, AI plus humans are beating AI, so we still have a role. Why that is the case comes down to the fact that biological neural networks work differently than artificial neural networks.

We look at the same problem and approach it very differently than AI, so we have something new to add, and AI has something new to add relative to how we approach it. When you bring the two together, it is additive simply because the approaches are different. And especially as you go into creative spaces, that becomes increasingly important because AI is learning from lots of data.

The advantage today is that AI models, even when you go to non-text domains like Stable Diffusion and others, have a far larger repository of training data than we do. However, where we can excel is in going beyond that training data. Again, that's where the two coming together add value.

Caroline: What does this collaboration look like?

Kartik: It would be very domain-specific. A lot of my recent research has been focused on human-AI collaboration and how to design and redesign workflows where AI is part of that workflow. So, I spent a lot of time thinking about what the role of the human is. What’s the role of AI? I’ll give you a concrete example from our own setting: my startup, Jumpcut Media.

One of our main products is a script-reading tool. The problem in Hollywood is that for every script that gets read, there are probably several dozen lying in an unread pile that no one’s going to read. What gets read are the ones by credentialed writers or those with access. And new writers’ stuff is just lying there unread. Beyond that, there’s no institutional memory of what was read by people.

So if Netflix says they were looking for a great elevated horror movie like Get Out, what else do we have? Nobody knows what was read three years ago or what colleagues have read, and so on. We have an AI pre-read of all these scripts so that everything is read. And so there are reports the AI system generates that humans can read.

And so often, you would get pushback from some creative people who feel threatened that it is here to replace them. And sometimes it's pushback along the lines of: AI cannot do creative, subjective assessment the way I can, because I've honed my skills. We've designed our product to do many things humans are currently doing, like figuring out the arc of the character in the story. What is this similar to? What other movies is it like? How well did those things do? How different is it relative to all the other stories out there? Things like that.

We tell our clients to let the AI pre-read to determine the story. What is it similar to? How well did those stories do? What are the characters? What is the arc of the characters? And does it fit our mandate? If it fits our mandate, then your job is to read it and figure out how the story makes you feel. And if your entire mental bandwidth is focused on emotional resonance, on what the script made you feel, then you can do a much better job of that portion, because you don't have to waste your bandwidth on just keeping track of character arcs, writing the logline, and so on.

And so that’s one example of task partitioning. Of course, you can go into software development, where it’s not about task partitioning. It’s about a co-pilot who suggests how to autofill the code and so on. And so that’s another way to design. So it would be very domain-specific, but it’s roughly along the lines of either figuring out what AI is better at or what humans are better at in partitioning the tasks or a co-pilot that is allowing you to do what you’re anyway going to do, but just do it faster.

Caroline: That makes sense. So, how did you decide on the workflow between humans and AI?

Kartik: There’s a lot of trial and error in this. And the trial and error is, with our users, it was not clear to us initially what AI capabilities are. My colleague Ethan Mollick calls this the jagged frontier, which is this idea that AI can be good at certain things, poor at certain things, but there’s no saying beforehand, the frontier is not very well defined, it’s very jagged.

It’s very hard to say what it will be good at and what it will be poor at. We saw that with our script analysis tool. When we work with clients, we try to figure out what their complaints are. What are they doing themselves? We compare it to a human reading the script, a human’s report versus the AI report, and try to see what they are able to do well versus not.

And then it was through a lot of this kind of trial and error that we figured out that AI can do most of what you are spending your time on. But ultimately there's human emotional resonance. AI can generate a report about emotional resonance, but what does that report even mean at the end of the day? So that's where we realized that was a very tough nut for us to crack.

To be honest, for us as a company, the realization came from us trying to sell. We were so close to selling to this particular studio, and everyone there was sold. And then one studio boss came in and said, "The entire training for my staff is focused on how this story makes me feel, and your report on that is not useful."

We then spoke with the staff, who said they loved the product. They explained that it saves them significant time on weekends when they're typically assigned numerous scripts and books to review. Previously, they had to skim through everything superficially, unable to engage deeply with any single piece. They would then make quick judgments about which materials seemed worth a closer look.

This feedback helped us realize our product’s true value. By efficiently identifying the most promising scripts, we’re saving mental bandwidth for the staff. This allows them to focus their energy on deeply engaging with the best material and analyzing their emotional responses to it, rather than rushing through large volumes of content.

That’s one example. I’ve got another startup that I’m working on right now, which is in the financial space. We are helpingtraders backtest their hypotheses. Whether it’s retail traders or institutional traders, people have lots of ideas, but not enough time to backtest them and figure out which ideas make sense.

After spending considerable time working with our clients, we came to an important realization. We found that hypothesis generation is an area where humans can still add significant value. While AI can suggest ideas, it’s the human judgment that determines which of these ideas are truly interesting or worth pursuing.

For example, a human might hypothesize that insurance stocks are affected every time there's a hurricane. The role of AI then becomes analyzing this hypothesis. Based on this hypothesis, it can determine what kind of market action might be appropriate: whether to buy, sell, or perhaps purchase an option.

Our AI can do all of that analysis, but the hypothesis generation and even figuring out which hypotheses are worth pursuing, let’s have a human do that. But it’s all trial and error.
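As a toy illustration of that division of labor, the human supplies the event hypothesis ("insurance stocks move after a hurricane") and the machine does the mechanical backtest. All prices, dates, and function names below are made up for illustration; they are not from the startup's actual system, which would pull real market data:

```python
def daily_returns(prices: list[float]) -> list[float]:
    # Simple day-over-day percentage returns from a price series.
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def backtest_event(prices: list[float], event_days: list[int]) -> float:
    # Average return on the day *after* each hypothesized event day,
    # which the human would then compare against the stock's baseline
    # drift to judge whether the hypothesis is worth pursuing.
    rets = daily_returns(prices)
    post_event = [rets[d] for d in event_days if d < len(rets)]
    return sum(post_event) / len(post_event)

# Hypothetical insurer price series; hurricanes land on days 1 and 3.
prices = [100.0, 101.0, 95.0, 96.0, 90.0, 91.0]
avg_post_event = backtest_event(prices, event_days=[1, 3])
```

Here the average post-event return comes out negative, which is the kind of quick quantitative check that would otherwise eat up the trader's time.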

Caroline: Yeah, that makes sense. How do we establish trust in the collaboration with AI, especially in domains where you don’t have expertise?

Kartik: Trust is a huge issue around AI, and it has many layers. When we started working with companies, trust was the biggest hurdle, and it still is.

One layer of trust isn't about the AI making up facts or whether to trust its output. Sometimes, the trust issue is around why someone should work with this tool and provide data to a company that might eventually replace them and cost them their job. This concern revolves around the end goal: whether the AI system is designed to replace human workers.

We’ve spent a lot of time on this co-pilot terminology to educate our clients. Our vision is not to substitute teams but to enhance their capabilities. For example, instead of a team reading only 10% of the scripts, they can read 100% with AI assistance. This approach helps clarify that AI is there to augment their work, not take over their roles.

There are two key points regarding hallucinations and trust. First, people often have the wrong mental model of AI, expecting it to be perfect. When AI fails, they lose trust completely, even though AI can offer significant value. The mental model should be about whether the system is better than the status quo and how to manage its shortcomings.

For example, we tell users not to expect perfection from the AI but to see it as a tool that still needs human oversight. We have built systems to mitigate hallucinations, similar to Perplexity's search. Perplexity provides AI-enabled search results with citations, helping users verify the information.

We have implemented features that offer key takeaways and allow clients to ask questions like “why?” to get explanations. This approach is not limited to scripts but also includes summarizing social media comments for trailers. For instance, when analyzing social media sentiment for movies like “Barbie” or “Flash,” we could advise showing more of popular characters based on sentiment analysis and provide the specific comments that informed our advice.

We use agentic approaches, where multiple AI agents handle different tasks. For instance, in translating a book, various AI agents handle editing, localization, and cultural context, mimicking how humans would do it. This method helps address hallucinations by having an AI critic provide feedback, ensuring the summaries match the source documents.
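A minimal sketch of that critic idea, with toy stand-ins for the writer and critic agents (a real system would make LLM calls at each step; the grounding check here is a deliberately crude word-overlap heuristic):

```python
def writer(source: str) -> str:
    # Toy "writer" agent: first sentence of the source plus a
    # deliberately unsupported claim, so the critic has work to do.
    first = source.split(". ")[0]
    return first + ". The script won an Oscar."

def critic(summary: str, source: str) -> list[str]:
    # Toy "critic" agent: flag summary sentences with little word
    # overlap with the source (a crude hallucination check).
    src_words = set(source.lower().split())
    flagged = []
    for sent in summary.split(". "):
        words = set(sent.lower().rstrip(".").split())
        if words and len(words & src_words) / len(words) < 0.5:
            flagged.append(sent)
    return flagged

def agentic_summarize(source: str, rounds: int = 2) -> str:
    summary = writer(source)
    for _ in range(rounds):
        flagged = critic(summary, source)
        if not flagged:
            break  # critic approves: summary is grounded in the source
        # Reviser step: drop sentences the critic could not ground.
        kept = [s for s in summary.split(". ") if s not in flagged]
        summary = ". ".join(kept)
    return summary
```

The structure, not the heuristic, is the point: a second agent reviews the first agent's output against the source before anything reaches the user.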

Caroline: How long does it take them to process a script? How long are scripts typically?

Kartik: A script is usually about 120 pages, but it’s dialogue-heavy, so not as text-dense as a book. We also handle books, which can be 400–500 pages. Our AI can read and provide a detailed report on a book in about two to three minutes. Clients tell us these reports are very accurate and detailed.

Caroline: Wow, that is fast. Recent AI models, such as Claude and Midjourney, demonstrate capability in tasks traditionally considered uniquely human, such as creative writing and visual arts. How does this affect our view on creativity and thinking? What does it mean for human identity in creative jobs?

Kartik: Let’s talk about where we are today. I feel that today, and when I say today, maybe in the next two or three years. One is that these AI systems could completely destroy creativity, which is a concern that a lot of creative folks have.

One of the first few strikes against AI was from Hollywood. The Writers Guild of America had a strike, the Screen Actors Guild had a strike, and it had many pieces, but one of those pieces was, "We don't want you to use AI for writing," and so on. The viewpoint there is that AI can destroy creativity, and that some of the most beautiful movies, like my favorites "The Shawshank Redemption" or "Life is Beautiful," come from moving human personal experiences or human creativity. But at the same time, 80% of the movies out there are garbage. So, you could say that many of these other movies could have been written by AI.

The other view, which I subscribe to, is that AI can actually be a very powerful force in enhancing the creativity we see in many ways.

As I mentioned, one way is that it frees your bandwidth to focus on the most creative and difficult parts of the tasks. It allows you to go deeper and do those parts better versus taking care of the more tedious manual pieces, even with scripts. So that’s one piece.

The other piece is that AI can help with creativity in another way. I’ll use the movie example, but this is true elsewhere. Today, if I have a brilliant idea — and I did have a brilliant idea; you mentioned that I had a script that never got made. If I have a brilliant idea, I have to go through so many gatekeepers before I can make that. These gatekeepers have to bless me and let me into the system, and then it takes at least about $10–15 million to make that movie.

We’re not far from a point where the video will have its ChatGPT moment, where somebody with a great idea, sitting at home in their garage, will make a movie that looks like a $5 million high-production value movie. But they just sat and made it at home with a camcorder with a few friends. It was just a brilliant idea, and they were able to pull it off. It’s going to lower those barriers and allow so many more creators to enter the market and to be able to contribute. So that’s another way to unleash creativity.

There’s also the idea that AI doesn’t have to necessarily only learn from past data. You have ideas in machine learning, like reinforcement learning, which is the idea of trying new things and observing what happens. Trying new things is what creativity is all about. You can bring in reinforcement learning, and now you can get these systems to write and to create.

We’ll probably end up in a world where AI is involved in most of the things we see. Especially for routine creative stuff, like creating commercials, creating ad copies, and so on, it’ll be heavily dominated by AI. It’s the highest end of creativity that’ll be left for humans. We might even have the equivalent of handcrafted stuff, which you charge a premium for, but you better be good at it. We’ll have something like that, where somebody certifies this is purely human-made.

And the director — you go and watch their movie because there’s no AI in this movie. You watch that, you pay for that stuff, but the rest of the stuff, you’ll watch it at home and say, “Okay, this is AI-generated, AI-contributed.” So that’s what I kind of see.

Caroline: How do you think we will deal with the influx of content?

Kartik: There will be just too much content out there, right? And I think when there’s so much content out there, you need something to help you select. And unfortunately, it will be an AI algorithm that will tell you to watch this; don’t watch this. It’s already happening with YouTube and Netflix algorithms driving most of our choices.

Five years ago, about 90% of the choices on YouTube were driven by its recommendation algorithms, 80% on Netflix, and so on. And it's going to be 99% in the future. So when there's so much content, we will not be able to navigate it. The algorithms will drive choice. So, I think people who control these platforms will have disproportionate power.

Caroline: What are the skill sets that would be valuable?

Kartik: Seven years ago, whenever I did a talk on AI, people would ask me what skill sets would be valuable. I'd say coding, development, data science, and so on. AI is now taking over those skills.

We’re now seeing a shift towards the importance of skills typically associated with MBA programs. While AI can handle many technical tasks given sufficient data, it’s the human skills that are becoming increasingly valuable. These include decision-making, empathy, and creativity. The ability to navigate complex situations, understand and relate to others, and generate novel ideas are becoming extremely important in an AI-driven world. These uniquely human capabilities are likely to be the differentiators in future workplaces.

Many people have stayed longer at companies because of the culture or because a specific manager was amazing, right? And those are the soft skills that ultimately will matter more. It will come down to empathy, creativity, leadership, and communication.

I have a thesis I’m developing about the evolution of human society. The industrial revolution marked a significant shift in our species’ development. Before that, we were a society that primarily valued physical strength and manual labor. The industrial age transformed us into a society that prioritized intellectual capabilities and information processing.

Now, as AI becomes capable of handling much of our information work, I believe we're on the cusp of another major transition. This next phase will likely emphasize uniquely human attributes such as consciousness, interpersonal connections, self-awareness, empathy, and creativity. We're moving towards a society that values these qualities more than ever before.

This shift represents another significant evolution for the human race, potentially as impactful as the changes brought about by the industrial revolution. It will likely redefine how we view ourselves, our work, and our relationships with each other and with technology.

Caroline: How is Wharton preparing current students and alums, and how can we interact more with Wharton?

Kartik: We started AI at Wharton in 2019, so well before ChatGPT. The center’s vision was that AI is a general-purpose technology that will change every industry. We have to re-skill people at scale, so our center was created to think through some of these issues.

We started by launching AI courses at Wharton in 2019; I taught them, and my colleagues are teaching them as well. Then we decided to make courses available to professionals outside: we've launched a four-course specialization on Coursera, and we've got one on Wharton Online. But the space is changing so rapidly that what I recorded a year ago feels dated at this point. So when people email me and ask whether they should take a course, I give them my recommendation and tell them of any caveats and limitations, because the field is moving so fast.

We’ve started to do a few things now. We are doing an open enrollment AI for business transformation course. We offered the first version in San Francisco this spring, the second version in the fall in Philadelphia, and the spring version will be on our SF campus. So we’ll do it every year, one version in spring in SF and one in fall in Philadelphia. By the way, it’s free for alumni. So if any of you who feels like, there’s too much going on, I need to catch up, as an alumnus, you sign up, and I think Wharton gives you, I forget, two courses or something of that sort for free, so you just sign up, and it’s a one-week course, from Monday to Friday, we’re offering a deep dive into AI with two versions of the course: one for the uninitiated and one for technical people like CTOs and CIOs.

I realized that the courses I'm offering can become dated quickly, so I'm trying to be more active in sharing updated information. I recently launched a Substack for those interested in keeping up. You can find it at hosanagar.substack.com, where I discuss recent trends in AI and their implications at least once every two months. We're doing a few things, and we plan to do many more for alumni education.

Caroline: Thank you. So before we start the Q&A, what advice do you have, especially for startups trying to figure out what they should do with AI?

Kartik: In terms of startups, I would say the biggest thing I’m experiencing with the two startups I’m involved with, and a few other startups I’m advising, is twofold.

First, AI is pretty much allowing you to reinvent everything. If you look at software as a service, you can reimagine the whole stack. Every SaaS company is under threat, and new companies can completely reimagine it. A two-person, five-person, or ten-person company can provide offerings that compete with 200–300 person engineering teams. So, the canvas is wide open.

Second, while the canvas is wide open, it’s true for every competitor of yours, too. For everything you can do quickly, a competitor can be a very fast follower and do it in no time. We actually created this product category in Hollywood in the fall of last year, and today we’ve got at least six competitors that claim to have the same offering. This happened within nine months. People see this and they go and copy it.

So, the challenge tends to be that when you build AI applications, it may often be the case that the magic is happening at the foundational model level. Every competitor of yours has access to the same foundational models, like those from OpenAI. You have to think hard about what is your moat at the end of the day.

It’s tough today to have a moat in software built on top of the foundational models. The moat comes from either having a crazy amount of proprietary data, which is hard for startups (incumbents have an advantage there), or it comes from very interesting, innovative architectures. You have to stay ahead of the curve in terms of these.

We’ve embraced agentic architectures, evaluations of LLM outputs, and so on. We’re trying to do a bunch of these things, but for everything we do that is publicly visible, it’s only three to four months before a competitor copies it.

So, it is a big open canvas with lots of opportunities, but it is very easy for fast followers to come in.

Caroline: That reminds me of something Ray Stata said. Your competitors can copy your processes and features, but your moat is how fast your organization can learn and adapt.

Audience Question: One thing I’d like to understand from you is mental models when it comes to people perceiving AI-based content. Given that we know people’s psychology and skepticism, do you have any thoughts on how we can change that perception to build products that increase productivity?

Kartik: Unfortunately, I don’t have any silver bullets. It’s one of those things where certain strategies can sometimes work. Companies often don’t know what employees want. For example, when we introduced a tool to Hollywood studios to increase productivity, the initial response was often “no thank you” for various reasons. However, once we made the tool available, employees started using it and became advocates. In some cases, employees who you might think would resist because they feel the tool replaces their jobs actually use it because it saves them time — like their weekends.

Another important aspect is education. You must invest heavily in AI education if you have an AI company. This realization made me more active on LinkedIn in recent months. At my startups, I’ve seen that AI education is crucial. For example, many clients don’t know how to assess an AI product. We had clients who took months to start a review process, often forming committees to create evaluation rubrics. At some point, we decided to create and send them a rubric ourselves. It sounds silly, but it worked. The clients appreciated it, saying we saved them time.

AI education is a big piece, and there are many ways to approach it, such as through newsletters or founders being active on social media.

Audience Question: My question builds on the last topic about creating moats for startups in the AI age. In a B2B context, AI technology might not be as strong a moat as it used to be. Access to customers isn’t an advantage because companies like Microsoft or Google can easily integrate and subsidize AI. So, for an AI startup in B2B, what do you think about building a moat? Would it involve network effects or something else?

Kartik: It’s very challenging to have a strong tech moat with B2B SaaS, especially in AI, where much of the magic happens at the foundational model level. However, network effects can be valuable. Another critical aspect is a unique go-to-market strategy. Companies that figure out an effective AI go-to-market approach will be very successful. For instance, Lemonade, an insurance company, started with zero data. They emphasized a great customer experience and low cost to attract users. Once they had users, they leveraged customer data to enhance their AI-driven approach.

Distribution advantages can also play a role, although incumbents have an edge here. Agility is another crucial factor. If you can show clients that you are current and staying ahead of the curve in a fast-changing world, it’s valuable. For example, I had a call with a CTO today who was grilling me about fine-tuning and other aspects to ensure we were on top of it. He finally said he wanted a partner who could stay current and agile. So, being agile, fast, and financially stable is one way to build a moat.

Audience Question: Over the last 10 or 15 years, we’ve seen startups get funding from VCs, but a lot of that money ends up going to Facebook and Google for ads and customer acquisition. That’s been the direct funnel — from raising money on Sand Hill Road to it ending up in Mountain View or Menlo Park. With AI and its disruptive impact, as a startup founder, I wonder if all my VC money will eventually end up in the hands of OpenAI. Where can we differentiate or do something to prevent that from happening, or is that just a fate we’re all doomed to see?

Kartik: It’s a great question. I do have no doubt that a lot of the money you raise will end up with OpenAI. It’s similar to the cloud world, where many cloud and SaaS companies have emerged, but the hyperscalers are undoubtedly the winners. The equivalent of the hyperscalers in this world will be OpenAI and the likes of Anthropic. So, there’s no doubt that part of the segment will win. However, just like AWS and Google Cloud make a ton of money, many SaaS companies are doing incredibly well even though they’re built on top of the hyperscalers. There’s room for AI companies to build on these foundational models, differentiate within an industry, deliver value, and capture that value.

It will come from the unique go-to-market strategies we discussed or from having a team that’s ahead of the latest trends. While there are fast followers, if you can consistently stay three to six months ahead, your clients will count on you. However, if I were a VC, I would worry about where the moat is with these startups. Ultimately, I’d question if all the money I’m investing is going to those infrastructure companies and whether I should be playing in that space.

Audience Question: Have you explored how primary education might shift? You mentioned things like consciousness, empathy, creativity, decision-making, and emotional resonance as being top-tier human traits. Have you started any research or collaborations on how education might change?

Kartik: I personally haven’t done much, but I’m very interested in it. There’s a professor at Carnegie Mellon, Po-Shen Loh. He does a lot of talks on this topic and focuses on training people in math in an AI world. He emphasizes creativity in solving math problems and proofs. The US used to struggle in the global Math Olympiad for decades, but they’ve won twice under his leadership. His emphasis on creativity was key. Now, with AI, his approach focuses on teaching creativity in math.

Those aspects — creativity, empathy, decision-making, communication — will matter a lot. Additionally, learning to work with AI will be crucial. It’s a unique skill set: knowing when to use AI, what to outsource to AI, and when to validate AI’s outputs. These skills are critical. Currently, education systems are denying students access to these tools. I’ve been encouraging my kids and their friends to learn about AI. This summer, I ran a summer camp for six of them, teaching AI concepts in the morning and having them implement and demo something by the end of the day. For instance, one team built a custom GPT for recommending Roblox games and another for book recommendations. It’s about teaching people how to use AI tools and understanding the implications of AI in navigating information.

Audience Question: My question is about a report from Goldman Sachs about AI, which suggested companies spent too much money and got too little value. I work in data for our company, and we’ve heard similar customer concerns. There have been many AI winters in history. What makes you think this time is real?

Kartik: I do believe it’s real. AI is a general-purpose technology (GPT) like the printing press, electricity, computers, and the internet. General-purpose technologies change the basis of competition in society and can reshape the global and local dominance of countries and companies. Research consistently shows that AI has the properties of a GPT. However, the impact of a GPT always comes with a lag: these technologies evolve rapidly, early investments often fail, and the organizational learning from those investments is what eventually pays off.

For example, in 2017 I wrote an article in Harvard Business Review titled “The First Wave of Corporate AI Is Doomed to Fail,” arguing that early investments in AI would fail precisely because AI is a GPT: the impact comes later, with innovations following the initial investments. My PhD research included the IT productivity paradox, in which companies that invested heavily in IT initially saw little productivity gain; it took time for the innovations to show results. Similarly, AI’s impact will come with a lag. LLMs are the first significant breakthrough, and more innovations will follow in the next few years.

Audience Question: Right now, I’m often advising boards on AI oversight and governance. We’ve talked a lot about the what and the why, but not about the who. Who should be at the table for setting strategic goals and evaluations? CFO involvement is happening earlier than before. Who do you suggest should be involved in AI oversight and governance? Companies often hire expensive technology talent, but those individuals tend to look for problems to solve rather than focusing on what matters for the use case.

Kartik: It’s a great question, and there isn’t a definitive answer. Researchers are still studying which AI innovation structures work best, centralized or decentralized. I have a view on this, but it’s not necessarily the right one. When Google announced it would be an AI-first company about eight years ago, it had to figure out what that meant and who would be involved.

Google’s approach was to diffuse AI knowledge across the organization rather than centralize it in one team. They started by creating centers of excellence, like DeepMind, and then focused on spreading that knowledge. Education was key, with programs for employees. Additionally, engineers were given the equivalent of a sabbatical to spend six months in the center of excellence, working on projects related to their products. Sundar Pichai mandated that every product team integrate AI in some way, even if the ROI wasn’t immediately positive.

This approach worked at Google, but it might not generalize to other industries like insurance or oil and natural gas. I prefer diffusing AI knowledge over time rather than centralizing it. Companies need to focus on both producing and consuming AI. It’s important to ask every team what AI tools they could use, even if those tools still need to be built. Hackathons for non-AI staff can help managers figure out how to integrate AI into their workflows.

Audience Question: I work in financial technology, and you mentioned that startups need to be fast, agile, and ahead of the competition. However, in industries like finance, even after a decision is made, finalizing legal agreements takes six months, and the entire sales cycle can take two to four years. Does being ahead by three to six months matter in such slow-moving industries?

Kartik: Great question. Some of this is industry-specific. In industries like finance, compliance is crucial. Bringing a compliant AI solution is a huge advantage. In Hollywood, for instance, providing a solution that is SOC 2 compliant early on is valuable because many startups can’t offer that. In finance, it’s similar. Being compliant and de-risking AI efforts is important.

Being ahead of the competition is about more than just the product being three to six months ahead. It’s about being a reliable partner, not just a vendor. Clients need to trust that you can keep up with fast-changing trends. For example, in Hollywood, clients want to know that you can stay current and manage new developments. It’s about being their brain trust, especially if they’re not a technical industry. Convincing them that you’re that partner is key.

In some industries, being ahead means de-risking clients’ AI efforts through compliance, and in finance that’s particularly important. It’s not just about having the latest product; it’s about ensuring it meets compliance standards and can be trusted to stay current and relevant.


Written by WAFFA

Wharton Alumnae Founders & Funders Association (WAFFA) accelerates the success of women in the startup ecosystem. Join us: HelloWaffa.org
