Readers,
Ok, I give in: Let’s talk about AI. Interestingly, the public discussion, which normally doesn’t focus on policy, is currently dominated by it, thanks to the Future of Life Institute “pause” letter, some insane adjacent articles, and various efforts to rein in ChatGPT, including a rare instance of Italian tech policy leadership.
That leaves a nice opening for me to talk about the business side of things – specifically, the business model.
Let’s dive in.
– George (@coemannn)
THE BIG TAKE
Scenarios for the business model of AI
The rapid emergence of generative AI in 2023 led to a lot of speculation about its impacts on users, jobs, and humanity’s chances of survival. But there’s been much less discussion of how generative AI will actually make money. That question is fundamental to a technology’s ultimate impact on society, as social media demonstrated very clearly: we are still discussing how to tackle its ads-based business model today.
In some ways, it's hard to predict how AI will make money given how early it is, but there are really only three ways this plays out:
You pay the AI company;
You pay someone who pays the AI company; or
Someone else pays, and you are the product.
These options correspond to three possible scenarios I see for the AI business model. Each is worth understanding in more depth (warning: speculation ahead).
Source: PRISM
AI takeover:
This is the scenario everybody is currently assuming: OpenAI (and other large LLM developers) will grab all of the information online and provide it to people in natural language through ChatGPT and related tools like AI search experiences.
How are they going to make money from this? Ads and subscriptions.
We’ve already seen Bing put ads in its AI search product.
ChatGPT Plus (which includes GPT-4 access) has a $20 monthly subscription, which may already make OpenAI a multi-hundred-million-dollar revenue business.
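For a rough sense of scale, a quick back-of-envelope sketch (the subscriber count here is purely an illustrative assumption, not a reported figure):

1,000,000 subscribers × $20/month × 12 months ≈ $240M/year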
The immediate success and sheer obviousness of this business model make it the baseline scenario. Looking at its potential consequences, it’s easy to understand the alarm over AI’s impact on society. Here are some consequences we’d expect:
Max societal disruption: This scenario implies AI pulling in all of the information online and owning all of the user interaction with it, presumably with compensation to content creators at near-zero levels. Of the three scenarios, this one would put the most people out of work, as AI absorbs as many existing roles as it can. That disruption is big enough on its own. One of the hardest-hit industries will be news – this will be far worse than any impact the internet has had on the news industry thus far. Moreover, the information environment will become very fragmented, almost entirely individualized to each user’s interaction with the AI, partly because advertising will incentivize it. You can guess the consequences.
Closed internet: Taken to its extreme, people will put less of their writing online. That means the erosion of the open information environment we’ve become accustomed to, which in turn will push people even further toward getting their information from AI. This will also have consequences for advertising. The future could be the ultimate in programmatic advertising, with AI making a range of buying decisions for users and constantly curating content. SEO will turn into AIO, with advertising constantly tweaked to improve its chances of being picked by the AI system synthesizing and curating content. Getting to number one in the AIO rankings could literally mean automated purchases of your product by every individual using the AI system.
This outcome is the closest scenario to today’s tech business model and can be seen as the recent internet era on steroids: attention economics, inequality, and societal disruption, but on a much larger and faster scale.
Plug-in universe:
This scenario is the future people are getting excited about with OpenAI’s release of plug-ins. OpenAI becomes more like an app store, offering specialized tools that can be accessed, in natural language, through your AI “front door”. Eventually, this won’t be in a ChatGPT dialogue box; it will be on your phone, maybe even with a voice (goodbye Siri, it’s been real).
The interactions here are still centralized through AI. But the business model is different, with “apps” competing for user dollars to provide specialized services. We can guess at some of the consequences of this too.
Max walled gardens: Users pick their favorite AI interface, much as they pick their favorite smartphone and its associated tech stack and app universe. Switching should be easier than with phones, but people will still likely have favorites they use most of the time. And AI companies will have significant control over everything that happens in their app ecosystems, setting the rules and exerting a huge, centralized influence over the direction of travel. Dominant AI platforms like OpenAI will accumulate market power and exercise so much control that antitrust interventions like those we’ve seen for app stores will be inevitable.
Branded content supply: Here, the main dynamic will be AI ecosystems competing with each other to attract the best “app developers” – in this case, content suppliers. Content supply will premiumize: creators with stronger brands and higher-quality content will give an advantage to the platform they supply. An LLM trained on everything online will mostly give bad answers. “What is the world’s future?” is a bad question for an LLM trained on data that includes your uncle’s random opinions, but a really interesting one for a model trained on everything written by genuine experts on global trends, with lots of really specific insights and follow-up questions to pursue. The same is true of fashion. “What should I wear to the party tonight?” is a bad question to ask the internet; it’s a great question to ask fashion experts.
As in the first scenario, this could lead to a closed information environment. However, in this scenario, AI ecosystems will compete to improve their closed information environment by paying premium content creators. This can be analogized to the streaming market today: Companies pay billions to offer better content and attract users to their ecosystem of content.
This creates a “supply network effect” of sorts: instead of a platform becoming more valuable to users as more users join, it becomes more valuable the more content providers write within its closed information ecosystem. And the more value in that ecosystem, the more content creators want to supply it.
“GPT Inside”:
The last scenario is what I’d describe as “GPT Inside”. This is where OpenAI takes an “Intel Inside”-type approach, choosing to power other tools rather than sell directly to end users. It’s much like how the cloud era developed: tools sit on the cloud and are enabled by it, and to end consumers, the cloud company itself is somewhat incidental.
In some ways, this feels like the “best case” scenario regarding societal impact: AI becomes a tool to empower everyone rather than take power for itself.
Still, this will bring challenges:
Max privacy and security risk: In this scenario, many companies become reliant on AI and integrate their own data and processes, supercharging the AI’s data and capabilities. But the AI company will be “behind the curtain”, invisible to end users and society at large. Given how much data the major AI companies would hold, this brings extreme levels of data privacy, cybersecurity, and surveillance risk. The risks will be higher while public scrutiny is lower: that’s a bad combination.
Industry concentration: Similar to the cloud, this will likely mean everything sits on a few big companies with massive infrastructure. Those companies will print money.
Wildcard - decentralized AI:
One more, somewhat positive, possibility is a decentralized future for AI: on-device AI acting as everyone’s personal assistant. This could even be done in a way that radically enhances privacy. Recent data portability regulations mean anyone can download all of their data from search engines, social media companies, and so on. Why not download it all onto your device, delete it from the platforms, and have your AI assistant share data only as needed as it interacts with platforms on your behalf?
This becomes more like buying a product - an expensive AI assistant device, with over-the-air updates. I like this scenario so much that I can only imagine it as a wildcard.
What this all means for you:
We’ve been thinking about this ourselves, and while it’s an important question, we don’t know the answer yet. Many tech startups are extremely freaked out that what they’re working on is already obsolete, and in some cases, they might be right.
What we think, though, is that 1) this will take a while, so don’t freak out yet, and 2) you should be ready to adapt to each of the above scenarios. At a high level:
If we’re headed for an AI takeover, focus on your human skills. For us at PRISM, that’s thinking in complex ways about the future, with less value derived from research. It means more focus on human advice and organizational impact. For you, it might be different.
If we’re headed for a plug-in universe, consider your unique content contributions and how you can carve out a differentiated role serving them through an AI interface. In other words, build your own differentiated content and databases that could add value to a user via a natural language interface. You likely won’t win trying to build the interface, so focus on the content.
If we’re headed for GPT Inside, it actually will be about the interface and tools you can build. So start thinking now about what AI packaging, AI UXs, and AI tools you could create for your customers using LLMs.
This is one of those inflection points. That means there will be new winners and losers.
HMM, INTERESTING
Top 5 - The eye-catching reads
Competition Policy International on the competition law implications of ChatGPT: Very interesting article laying out the potential problems - a menu of lawsuits we might expect! Some of the most interesting ideas and questions:
Generative collusion: Generative AI tools interact with each other, so collusion could happen algorithmically.
DMA applicability: Whether generative AI platforms should be designated as gatekeepers under the DMA.
Unfair competition: The extent to which AI can produce work very similar to that of other companies or brands, and how this could constitute unfair competition.
Takeaways from the Summit for Democracy: Alex Engler from Brookings with a great breakdown of what we learned. In short:
- Digital public services and expanding internet access were major themes.
- Internet shutdowns and surveillance tech are the key risks from authoritarian tech enablers; these could be offset by democracy-enabling tech (e.g. financial inclusion tools).
- AI is the emerging challenge (of course!)
China goes for the state-led approach to AI growth: China prohibits ChatGPT-like tools because of the risks they pose to its censorship regime. Its new rules are actually very interesting to watch, but they create a huge risk of China falling behind in the AI race, so it’s ramping up investment elsewhere.
The case for banning kids from social media: New Yorker write-up, with attention on the new law in Utah that tries to do this. Child safety issues have been growing for a while. It was only a matter of time before a movement to age-gate social media entirely started to build.
Russian hackers target UK institutions: We are surprised we haven’t seen this type of attack already, but we should expect more. An honor for the UK to be so high on the target list.
AND, FINALLY
My top charts of the week
The public loves AI in less democratic countries
So many good insights in the huge Stanford AI report. This was just one. It’s a major concern how skeptical the US and much of Europe are of AI.
Source: AI Index
E-commerce share is leveling off
Huge pandemic boom, but it is leveling off - at a mere ~15%, much, much lower than what most assume. Long live brick and mortar.
30% vacancy is the new normal in SF
Companion charts here show 30% office vacancy in SF… and that almost 80% of organizations say they’re now in the new normal.