S&P Global Market Intelligence

451 Research Generative AI Digest

July 2023


This report was published by 451 Research on June 26, 2023

Introduction
Welcome to the fourth of our Generative AI Digests. So much is happening so quickly in generative AI that we plan to produce one of these roundups each month for as long as it makes sense to do so. To give a better idea of the size of the market, we have produced our inaugural forecast of the software component of the generative AI market. Figure 1 below shows the market expanding at pace, with aggregate revenue expected to rise from $3.7 billion in 2023 to $36 billion by 2028. Much more detail, along with the underlying data, is available in our other reports. We continue to augment these digests with reports that you can find in our Market Insight service under our new Generative AI theme, and you can also set up inquiry sessions with our analysts.
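
For context, those two endpoints imply a compound annual growth rate of roughly 58% over the five-year span, since (36 / 3.7)^(1/5) - 1 ≈ 0.58.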


The Take
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war," according to the Center for AI Safety. That sounds ominous and a little like a call to panic, which may have been part of the point, but we believe it is hyperbole designed in part to keep the spotlight fixed on AI as a major opportunity, and perhaps to freeze the market by scaring off would-be competitors or perhaps look to the government for some sort of liability protection. Or are we all just one more large language model from extinction? We doubt it. It is fair to say that the regulatory pressures on the AI ecosystem have ramped up in the last month with scary proclamations, but more importantly, the passing of the EU's AI Act in the European Parliament. The stage is set for a contrast in regulatory styles between the EU, the UK and the US, among other jurisdictions.

Product releases and updates
Oracle Corp. announced plans to develop generative AI capabilities within its suite of applications by partnering with Cohere AI and using its foundation models. Cohere will train its models on Oracle Cloud Infrastructure — Oracle claims OCI has the highest-performance, lowest-cost GPU (graphics processing unit) cluster technology available — with what it calls an OCI Supercluster scaling to more than 16,000 NVIDIA Corp. H100 GPUs per cluster. It is a big bet on Cohere, which already has many partners and counts Google as an investor.

Salesforce Inc.'s AI Cloud is a collection of new and previously announced products enhanced using generative AI and underpinned by what Salesforce calls the Einstein GPT Trust Layer, with trust positioned as its unique selling point where generative AI is concerned. AI Cloud comprises Sales GPT, Service GPT, Marketing GPT, Commerce GPT, Slack GPT, Tableau GPT, Flow GPT and Apex GPT; all but the first two had already been announced by the company over the past few months. AI Cloud adds generative AI components to Salesforce's existing clouds, performing tasks such as auto-generating personalized emails, chat replies and code, among other use cases. Salesforce is leveraging its own large language models, such as CodeGen, CodeT5+ and CodeTF, as well as models from Anthropic, Amazon Web Services, Cohere, Google and OpenAI, or a customer's own models via ties to Amazon SageMaker and Google Cloud Vertex AI.

NVIDIA has a new supercomputer designed specifically for the development of large foundation models. The NVIDIA DGX GH200 AI Supercomputer is powered by GH200 Grace Hopper Superchips and the NVLink Switch System. The Grace Hopper Superchip combines an NVIDIA Arm-based CPU with an NVIDIA H100 GPU in the same package, increasing the bandwidth between CPU and GPU while reducing interconnect power consumption. The DGX GH200 is expected to be available by the end of 2023, with Google Cloud, Meta Platforms Inc. and Microsoft Corp. expected to be early adopters, according to NVIDIA.

Advanced Micro Devices Inc. touts its Instinct MI300X chip as a generative AI accelerator and an eventual direct rival to NVIDIA's H100. The MI300X features eight GPU chiplets, 192 GB of addressable memory, 5.2 TB/sec of memory bandwidth and 153 billion transistors. AMD demonstrated the 40-billion-parameter Falcon-40B large language model running on a single MI300X and claims the chip will be the most performant accelerator for AI workloads. It will be available as a single accelerator and on an eight-GPU board called the Instinct Platform, and begins sampling in the third quarter.

Meta Platforms Inc.'s MusicGen can create music from text prompts. The model generates 12 seconds of audio from a prompt, and users can optionally provide a reference audio clip from which a melody is extracted; the model then extrapolates from there. MusicGen is a single-stage transformer model trained on 20,000 hours of licensed music, drawn from a dataset of 10,000 music tracks plus additional data from Shutterstock Inc. and Pond5. Meta apparently plans to release the model as open source, although it is unclear whether any issues would arise regarding the music on which it was trained. A demo is available on Meta's Hugging Face page.
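
As a rough illustration, generation via Meta's open-source audiocraft library might look like the sketch below; the model size, prompt, clip length and output filename are illustrative, and the exact API may vary between audiocraft releases.

```python
# A minimal sketch of text-to-music generation with Meta's audiocraft library,
# assuming audiocraft and its dependencies (PyTorch, torchaudio) are installed.
# Model name, prompt and duration are illustrative placeholders.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("small")   # download a pretrained checkpoint
model.set_generation_params(duration=12)   # MusicGen produces short clips by default

# Generate one clip per text description.
descriptions = ["upbeat electronic track with a driving bassline"]
wav = model.generate(descriptions)         # tensor of generated audio samples

# Write each result to disk with loudness normalization.
for idx, clip in enumerate(wav):
    audio_write(f"musicgen_sample_{idx}", clip.cpu(), model.sample_rate, strategy="loudness")
```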

Informatica Inc. has made various enhancements to its Intelligent Data Management Cloud (IDMC), including the addition of generative AI functions to its CLAIRE AI engine in the form of a natural language-based interface to IDMC that can be used to simplify and accelerate how enterprises consume, process, manage and analyze data.

Intuit Inc.'s GenOS is positioned as an operating system that allows customers to build financial generative AI applications. The technology provider — best known for its TurboTax, Credit Karma, QuickBooks and Mailchimp brands — suggests that, in addition to offering a suite of tools, it will provide access to a set of proprietary financial LLMs. Areas of emphasis include tax, accounting, cash flow and personal finance applications — providing advice or triggering follow-up actions.

OpenAI added "function calling" as an enhancement to its GPT API. Each prompt to OpenAI's GPT-3.5 and GPT-4 models can now also describe external resources, such as other APIs, code or databases, that the model may draw on. GPT then chooses whether to answer the query directly or to use one or more of those resources; if it chooses the latter, the application makes the call and the results are passed back to GPT for further processing. This essentially removes the shackles from GPT, which previously could only make predictions, and thus answer queries, based on the data on which it was trained. Being able to make calls to external resources, and do so in a performant way, opens up new use cases for generative AI.
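
To illustrate the flow, here is a minimal sketch using the openai Python library as it stood at launch; the get_current_weather function, its JSON schema and the weather data it returns are hypothetical placeholders. The key point is that the model only returns the name and arguments of the function it wants called; the application executes it and sends the result back for a final response.

```python
import json
import openai

# Hypothetical local function the model may ask the application to call.
def get_current_weather(location: str) -> str:
    return json.dumps({"location": location, "temperature_c": 21})

messages = [{"role": "user", "content": "What's the weather in Boston?"}]

# Describe the available function to the model as a JSON Schema.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=functions,
    function_call="auto",  # let the model decide whether to call the function
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # The model opted to use the external resource: run it locally and feed
    # the result back so the model can produce a grounded final answer.
    args = json.loads(message["function_call"]["arguments"])
    result = get_current_weather(**args)
    messages.append(message)
    messages.append({"role": "function", "name": "get_current_weather", "content": result})
    final = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
else:
    print(message["content"])
```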

ChatGPT has a new head of product, Peter Deng, whose history includes a brief stint at Google before joining Facebook in 2007. He moved to Instagram as head of product in 2013 and then to Oculus, so he has experience managing product development from the early stages of key technologies.

Figure 1: Generative AI market revenue forecast

Source: 451 Research's Generative AI Market Monitor 2023.
© 2023 S&P Global.

Funding and M&A
Baidu Inc. set up a 1 billion yuan ($140 million) venture fund to incubate Chinese startups focused on generative AI. The Chinese internet search company made the announcement alongside the launch of a competition for developers to build applications on its ERNIE large language model.

Cohere AI raised $270 million as part of its series C round. The company says it sees itself in direct competition with OpenAI. While open-source foundation models have spurred wide participation among developers, Cohere doesn't think they will be of high enough quality for most enterprise use cases.

Google was the lead investor in text-to-video startup Runway's series D round, which raised $100 million. Rogue VC was the other investor. Runway has raised $195 million in four rounds.

Elise A.I. Technologies has raised $35 million in a series C round led by new investor Point72 Ventures. The round brings overall funding to just under $67 million. The company's Elise chatbot manages communication between tenants and property management firms.

Video personalization tool provider Gan.AI raised $5.25 million in a seed round in late May. The round of funding was co-led by new investor 10xF and Sequoia Capital India Advisors.

Video generation firm Synthesia raised a $90 million series C round, valuing the company at $1 billion. The London-based company's round was led by new investor Accel, which was joined by new investor NVIDIA plus existing investors Kleiner Perkins and FirstMark Capital.

Vectara closed a $28.5 million seed funding round at the end of May. An LLM-powered search provider, Vectara emphasizes its "Grounded Generation" capabilities, which it suggests eliminate hallucinations. The round was led by Race Capital.

Contextual AI, a startup that plans to build enterprise-ready foundation models, has raised a $20 million seed round led by Bain Capital Ventures with participation from Lightspeed, Greycroft, SV Angel and angel investors. Co-founders Douwe Kiela and Amanpreet Singh both previously held research roles at Facebook and Hugging Face.

French startup Mistral AI raised €105 million in a seed round led by Lightspeed Venture Partners. The Paris-based firm was founded by Arthur Mensch, Guillaume Lample and Timothée Lacroix. CEO Mensch was most recently a researcher at Google's DeepMind, while CTO Lacroix and Chief Science Officer Lample both worked in AI research at Meta. Mistral AI plans to build its own large language model — hence the need for a large seed round — aimed at enterprise use cases, for launch in 2024.

Neeva has been acquired by Snowflake Inc. The company offered a private search experience and operated a freemium model, although the majority of its users were on its free plan. It had raised a total of $77.5 million in two rounds from investors including Greylock Partners and Sequoia Capital.

Politics and regulations
The European Parliament passed the EU AI Act with an overwhelming majority (499 votes in favor, 28 against and 93 abstentions). The legislation now moves to negotiations between the Parliament, the EU Council of Ministers and the European Commission for further debate and potential amendment. The bill retains the tiered approach to AI risk, with providers of foundation models potentially facing particular obligations around transparency about their training data and how it relates to copyright law.

UK Prime Minister Rishi Sunak has plans for a conference focused on AI safety to be held in London in September, to bring together AI experts and politicians from around the world. The UK is attempting to position itself as a bridge between the likely tougher approach taken to AI regulations by the EU and the looser — or at least less developed — approach of the US. The government also appointed technology entrepreneur Ian Hogarth to lead a new foundation model taskforce, reporting directly to the prime minister and technology secretary. The job of the taskforce will be to carry out research on AI safety and help develop international guardrails such as shared safety and security standards, as well as to bring together experts from industry, academia and government. It has an initial funding budget of £100 million.

Japan's government reaffirmed that it will not enforce copyright laws on data used for training AI models, regardless of whether the use case is commercial or nonprofit — or even if the training data was obtained illegally. This is not a change to Japan's copyright law, but it could be viewed as an attempt to remove perceived barriers to Japan once again becoming a technology leader, a mantle it has lost over the past few decades, especially where computer science is concerned.

The City of Boston has sent guidelines to city officials encouraging them to start using generative AI, if for no other reason than to understand its potential. It has also given all public officials who use Google Workspace access to Google Bard.