The Right Side of the Brain Re-Invented?

Generative AI has been making waves in the VC and startup scene in recent weeks. A refreshing and energizing debate – especially after months of rather unpleasant news about market corrections, investor pullbacks, valuation drops, and layoffs. And a debate driven by technology, no less. As tech experts ourselves, who have assessed startups working in the space of Generative AI before, we are of course excited by the exposure the topic is currently getting within the startup ecosystem.

The topic was pushed to the forefront by diffusion models overtaking Generative Adversarial Networks (GANs) as the state-of-the-art AI models for image generation. Now they are expanding into text-to-video, text generation, audio, and other modalities.

Stability.ai and Midjourney are pushing the envelope there with text-to-image models that rival those of established AI labs. While Midjourney is reportedly profitable, Stability.ai secured $101M in funding from Coatue, Lightspeed Venture Partners and O’Shaughnessy Ventures LLC after releasing Stable Diffusion in August 2022. Stable Diffusion is an open-source text-to-image model that – unlike other generators – was made publicly available for free. Diffusion-based text-to-video generation also took major steps forward earlier this year, with Google and Meta announcing models for text-to-video generation – sooner than expected.

In October, Sequoia Capital brought the topic to everyone’s attention by putting together a Market Map on Generative AI, which laid out the main players for Code, Text, Image, Audio, Video, and other areas. Verve Ventures then enhanced Sequoia’s market map by adding the European players in the respective areas. Unsurprisingly, the map included AI startups we have worked with in the past as well.

Prospects are promising: MIT Technology Review described Generative AI as one of the most promising advances in the world of AI in the past decade. Sequoia estimates that Generative AI has the potential to become a trillion-dollar business, and research and advisory firm Gartner predicts a time-to-market of 6-8 years – with mass adoption in the near-ish future. Whether or not these predictions come true in full, Generative AI stands to transform tens of millions of creative and knowledge-based jobs and to play a vital role in driving future efficiency and value.

What is Generative AI and How Does it Work?

To begin, let us first get the terminology straight. What is Generative AI, and on which models is it based? Generally speaking, Generative AI uses existing content as source material – such as text, audio files, images, or code – to create new and plausible artifacts. Underlying patterns are learned from this material and used to create new, similar content. This differentiates it from the well-known Analytical AI, which analyzes data, identifies patterns, and predicts outcomes. One could say that Analytical AI mimics the left brain of humans, which is said to be more analytical and methodical, while Generative AI mimics the right brain – the creative and artistic side. Moving past the automation of routine and repetitive tasks, Generative AI is able to replicate capabilities that to date have been unique to humans – inspiration and creativity.

Moving on to the modeling types. To produce new and original content, Generative AI uses unsupervised learning algorithms. During training, the model receives only the raw input data and a set of parameters, and it is essentially forced to draw its own conclusions about the most important characteristics of that data. Currently, two model families are most widely used in Generative AI: Generative Adversarial Networks and Transformer-Based Models.

Generative Adversarial Networks (GANs)

A Generative Adversarial Network, or GAN, is a machine learning model that pits two neural networks – a generator and a discriminator – against each other, hence the name “adversarial”. Generative modeling tries to understand the structures within a dataset in order to generate similar examples; in general, it belongs to unsupervised or semi-supervised machine learning. Discriminative modeling, on the other hand, classifies existing data points into their respective categories and mostly belongs to supervised machine learning. Put simply, the job of the generator is to produce realistic images (or fake photographs) from random input, while the discriminator attempts to distinguish between real and fake images.

In the GAN model, the two neural networks contest one another in what amounts to a zero-sum game – one side’s gain is the other side’s loss. Currently, GANs are among the most popular Generative AI models.
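To make the adversarial setup more concrete, here is a minimal training-loop sketch in Python/PyTorch. It is only an illustration under simplified assumptions (tiny fully connected networks, flattened images, made-up dimensions), not the architecture of any particular production GAN:

```python
import torch
import torch.nn as nn

# Illustrative sizes: 64-dimensional noise vectors and flattened 28x28 images.
latent_dim, img_dim = 64, 784

# Generator: maps random noise to a synthetic "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
# Discriminator: outputs a probability that its input is a real image.
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Discriminator update: score real images as real, generated ones as fake.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator update: produce images the discriminator classifies as real.
    noise = torch.randn(batch, latent_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each training step first updates the discriminator to separate real from generated samples, then updates the generator to fool it – the zero-sum dynamic described above.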

Transformer-Based Models

The second model family widely used in Generative AI is based on transformers: deep neural networks that learn context and meaning by tracking relationships in sequential data – for example, the sequence of words in a sentence. NLP (Natural Language Processing) tasks are a typical use case for Transformer-Based Models.

Transformers provide context around each item in the input sequence. Attention is not paid to each word in isolation; rather, the model tries to understand the context that gives meaning to each data point in the sequence. Furthermore, Transformer-Based Models can process sequences in parallel rather than step by step, which speeds up the learning phase significantly.

Sequence-to-sequence learning is already widely used, for example when an application predicts the next word in a sentence. This happens through stacked, iteratively applied encoder layers. Transformer models apply attention or self-attention mechanisms to identify the ways in which even distant data elements in a series influence one another.
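As a rough illustration of the self-attention mechanism described above – a simplified sketch rather than a full multi-head transformer layer, with randomly initialized projection matrices standing in for learned weights – the following Python/PyTorch snippet computes, for every token in a sequence, attention weights over all other tokens in a single matrix operation:

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a whole sequence at once.

    x: (sequence_length, d_model) embeddings for one input sequence.
    w_q, w_k, w_v: projection matrices of shape (d_model, d_model).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v      # queries, keys, values
    scores = q @ k.T / (k.size(-1) ** 0.5)   # pairwise relevance of every token to every other
    weights = F.softmax(scores, dim=-1)      # attention weights per token pair
    return weights @ v                       # context-aware representation of each token

# Illustrative usage: random embeddings standing in for a 5-token sentence.
d_model = 16
x = torch.randn(5, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_model) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)       # shape: (5, 16)
```

Because the weights for all token pairs are computed at once, even distant elements of the sequence can directly influence one another, and the whole sequence can be processed in parallel.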

How Generative AI Will Transform Creative Work

Narratives and storytelling as a form of engagement will remain powerful, as humans are inherently drawn to stories – be it about a person, a business, or an idea. However, good storytelling is difficult and requires content creation in different formats. While we see plenty of other areas being automated and made more efficient, the process of content creation remains manual and quite complex.

Generative AI will help content creators by generating plausible drafts that can function as first or early iterations. AI will also help by reviewing and scrutinizing existing human-written text – from grammar and punctuation to style, word choice, narrative, and thesis. By creating content that appears to be made by humans, Generative AI will be able to take over parts of the creative process that until now only humans were capable of. It will be able to review raw data, craft a narrative around it, and put together something that is readable, consumable, and enjoyable for humans.

Previously, Generative AI was mostly known for deep fakes and data journalism, but it is playing an increasingly significant role in automating repetitive processes in digital imaging and audio correction. In manufacturing, AI is being used for rapid prototyping and in business to improve data augmentation for robotic process automation (RPA).

Generative AI will be able to reduce much of the manual work and speed up content creation. Most likely, every creative area will be impacted by this in one way or another – from entertainment, media, and advertising, to education, science, and art.

Challenges and Dangers

While Generative AI brings enormous potential and the progress made this year is truly astonishing, there is the danger of misuse. As with every technology, it can be used for both good and bad. Copyright, trust, safety, fraud, fakes, and costs are questions that are far from resolved.

Violent imagery and non-consensual nudity, as well as AI-generated propaganda and misinformation, are real dangers. Stable Diffusion and its open-source offshoots have reportedly already been used to create plenty of offensive images; according to Stability.ai, more than 200,000 people have downloaded the code since it was released in August.

Pseudo-images and deep fakes can be misused for propaganda and misinformation. With more and more applications – such as FakeApp, Reface, and DeepFaceLab – becoming publicly available to all users, deep fakes are no longer used only for fun and games, but for malicious or even criminal activities too. Fraud and scamming are another problem, as is data privacy: health-related apps, for example, run into privacy concerns around individual-level data.

Also, due to the self-learning nature of Generative AI, it’s difficult to predict and control its behavior. The generated results can therefore often be far from what was expected.

As with AI in general, machine learning bias in training data is a tremendous problem for Generative AI. AI bias is a phenomenon in which algorithms reflect human biases because of the biased data used during the machine learning process. An example would be a facial recognition algorithm that recognizes a white person more easily than a non-white person because of the type of data used in training.

Therefore, we need to be sensitive to AI bias and understand that algorithms are not necessarily neutral when weighing data and information. These biases are not intentional, and it is difficult to identify them before they have been baked into software. Understanding these biases and developing solutions to create unprejudiced AI systems will be necessary to ensure that existing biases and forms of oppression are not perpetuated by technology.
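One simple way to surface such bias in practice is to compare a model’s performance across demographic groups. The sketch below is a minimal illustration with purely hypothetical predictions, labels, and group annotations:

```python
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Compute accuracy per demographic group to surface potential bias.

    predictions, labels: lists of model outputs and ground-truth labels.
    groups: list of group annotations for each example (hypothetical here).
    """
    correct, total = defaultdict(int), defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        correct[group] += int(pred == label)
    return {group: correct[group] / total[group] for group in total}

# Purely illustrative, made-up data for demonstration only.
print(accuracy_by_group(
    predictions=[1, 1, 0, 1, 0, 0],
    labels=[1, 0, 0, 1, 1, 0],
    groups=["A", "A", "A", "B", "B", "B"],
))
```

A large accuracy gap between groups is a signal that the training data or the model itself deserves closer scrutiny.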

These challenges should not be a deterrent – no technology develops and grows without them. Responsible AI offers a way to avoid such drawbacks of innovation to a certain degree, or even eliminate them altogether.

What Founders and Investors Should Prioritize When Building & Scaling a Generative AI Startup

Research & Development: As so much regarding Generative AI is still in its infancy, research and development will have to be prioritized in any startup that wants to push the envelope in this area. In most cases, a strong research team with enough senior roles – people with multiple years of experience in machine learning – will have to set the foundation. With a strong dedication to maintaining focus within research and to accelerating research efforts, AI startups can differentiate themselves from competitors and gain a competitive edge.

Modeling and Product Management: Building up a mature product organization is key for the commercialization of companies in the space. Strong product management competence with in-depth technical understanding is of the essence when operationalizing an AI business strategy. Implementing a product framework that supports the growing engineering organization and sets clear priorities should be on the to-do list as well. Investors should especially focus here from a Series A onwards, since most scientific founder teams in the space lack productization experience and need to hire experienced product leaders. This should be accounted for rather early in the process.

Security and Compliance: Both need to be a priority. It is important to actively track and manage any security vulnerabilities in the system. Guidelines to fulfill the necessary compliance and security requirements should be defined and implemented to achieve production-readiness. This is particularly important in a governance context, but also in general.

Responsible teams need to be aware of and understand the security requirements. There needs to be visibility over changes made to critical infrastructure, so that possible malicious changes do not become noticeable only when they start affecting end users. The tech organization should be able to respond quickly to security incidents in an automated way; otherwise, detecting and resolving issues requires considerable manual effort. In startups and young companies, whose processes are often only loosely defined and still manual, this can become a security risk that needs to be on the radar.

Scalable Infrastructure: Generative AI startups should build a secure, scalable and automatically provisioned infrastructure that is easy to manage and controls the cost of computing and data training. The AI models described above require a lot of computing power, since the more combinations they try, the better the chance to achieve higher accuracy.

As startups and growth companies are competing in the Generative AI space, they are under pressure to improve data training and lower the cost of it. In addition, the carbon footprint of data training is an important factor in times in which impact is becoming an increasingly important measurement for investors. AI companies therefore need to strive for more efficiency in training methods as well as in data centers, hardware and cooling.

There should also be a plausible trade-off between the cost of training models and the cost of using them. If a model is used many times over its lifetime, it can deliver a proper return on the initial investment in training and computing power.
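A crude way to reason about this trade-off is to amortize the one-off training cost over the expected number of uses. The numbers below are purely illustrative assumptions, chosen only to show the arithmetic:

```python
# Back-of-the-envelope amortization of training cost over inference usage.
# All figures are made-up placeholders, not real benchmarks.
training_cost = 500_000.0     # one-off cost of training the model (USD)
cost_per_use = 0.002          # compute cost per generated output (USD)
value_per_use = 0.01          # value captured per generated output (USD)

def break_even_uses():
    margin = value_per_use - cost_per_use
    return training_cost / margin

print(f"Break-even after ~{break_even_uses():,.0f} generations")
```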

Conclusion

With Generative AI, content creators will have technology at their disposal that can learn patterns from existing data and use them to generate new content that can be considered an original artifact.

Generative AI will become increasingly important in the creation of synthetic data that can be used by companies for different purposes and scaled across different formats. AI-generated synthetic audio and video data, derived from text prompts triggered by some initial human input, can remove the need to manually shoot film or record audio: content creators can simply type what they want their audience to see and hear and let Generative AI tools create the content in different formats.

We believe that Generative AI will progress quickly with regard to scientific progress, technological innovation, and commercialization. While we are still at the beginning of this trend, a wide range of applications is on its way and plenty of use cases are being introduced to the market – ranging from media and entertainment, to life sciences, healthcare, energy, manufacturing and more. Innovative startups tackling problems around manual and time-consuming processes in the creative industry stand at the heart of this development, alongside established platform companies such as Google and Meta. Generative AI will also extend into the metaverse and web3, which have an increasing need for auto-generated synthetic and digital content.

Safety concerns and harmful use of Generative AI, such as deep fakes, pose a challenge and might impact mass adoption among consumers and corporations. Security and compliance guidelines will have to take the growing challenge of bias and the general importance of Generative AI governance into account.

As with other types of AI, repetitive and time-consuming tasks will be automated, eliminating certain portions of the tasks and activities that are currently done by humans. However, instead of eliminating creative jobs, Generative AI will most likely support processes in the creative industry through automation, with a human remaining in the loop as a controlling and refining instance. As an assistive technology that helps humans produce faster, it means we will see humans and AI working together for better and possibly more accurate results.