The emergence of Foundation Models such as Large Language Models (LLMs) represents an unprecedented advance in Artificial Intelligence (AI). But the widespread application of LLMs brings significant technical risks as well as ethical and social challenges. This Cheat Sheet shows how investors can mitigate the risks of LLMs with the help of Technology Due Diligence. Find out:
What should I ask an AI company early in the funnel to figure out if their Machine Learning is good? This is something we get asked quite often by clients, especially VCs who screen their deal flow. With the recent trends in Generative AI, the GPT models for instance, we see more AI companies emerging than in the years before. VCs are now tasked with selecting the most promising of these companies to present to their investment committees. Quite often the question comes up whether this is really “good ML”. Companies will claim to be using AI, but what is it really, and how strong is it? As technology experts, we are frequently asked by investors how they can separate the good AI from the bad. That is why we want to offer an early heuristic for thinking about the quality of a setup, in the form of three questions.
First Question: Are You Dealing With Machine Learning or Traditional Rule-Based Systems?
The first distinction one can draw is whether you are really dealing with ML or with a hand-crafted rule-based system. In other words: Does the tech involve statistical learning from data or not? A simple test: does the system improve when trained with more data? If so, it is ML. Note that ML also includes self-supervised and unsupervised methods; the question of whether and how best to allow such continuous self-improvement from incoming data is called “online learning”. So: is this the case, or is it mostly a system that follows rules set by developers beforehand and that is neither trained on data nor self-learning?
The latter is usually not real AI – at least not in the more recent understanding. So that is the first question: You can distinguish between a more rule-based technology and an AI or ML technology, where the company trains a statistical learning model or even builds a model itself and thereby has real ML capabilities in-house.
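To make the distinction concrete, here is a minimal sketch in Python; the thresholds, feature names, and synthetic data are all invented for illustration. The rule-based function never changes, while the scikit-learn model is re-fitted and improves as more labeled data arrives.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rule-based: thresholds are fixed by developers and never change,
# no matter how much data flows through the system.
def rule_based_churn_flag(monthly_logins: int, support_tickets: int) -> bool:
    return monthly_logins < 2 and support_tickets > 3

# ML-based: the decision boundary is *learned* from data and improves
# (statistically) as more labeled examples become available.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 2))            # e.g. [logins, tickets], standardized
y = (X[:, 0] - X[:, 1] < 0).astype(int)  # synthetic churn labels
model = LogisticRegression().fit(X, y)

# Retraining with more data changes the model; the rule above never does.
print(model.predict([[0.5, -0.2]]))
```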
Second Question: Do They Make or Buy?
The second level of distinction: Are they developing the ML themselves, or is it a model provided by a third party? Such third-party models can be open source or proprietary. If a model is open source, companies will very often adapt or fine-tune it themselves. In that case they are not developing something from scratch, but they are not simply taking something that already exists either. A current example is ChatGPT: you can build your service on OpenAI's GPT models, which are proprietary but usable under certain licensing terms. So here you can distinguish whether it is external ML capability, used and adapted for their purposes (which can also be a great business model), or self-developed ML, which is required for certain types of technology companies and usually goes much deeper.
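As a rough illustration of the "buy and adapt" path, here is a minimal fine-tuning sketch using the Hugging Face Transformers library; the base model and dataset are generic placeholders, not a recommendation for any specific stack.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"   # open-source base model (placeholder)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name,
                                                           num_labels=2)

dataset = load_dataset("imdb")           # stand-in for a company's own data

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()  # the "adapt" step that sits between make and buy
```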
Third Question: What Kind of Data Is Being Used for Training, and Who Owns It?
Whichever of the two paths the company takes, there is an even deeper heuristic for understanding whether the ML is good. Once you have figured out whether the model is self-made or bought, the bigger question is: how has this model been trained? Models will become a commodity in the coming years; there will be plenty of models and computing capabilities, and getting to a model will not really be a differentiator. The really interesting question will be how these models have been trained. With what type and quantity of data? And have they been trained by the company you are looking into? Owning data will be a real moat in the AI/ML-centric years to come.
If the company uses its own data, there are two vectors to distinguish. The first is the quantity of data points in the data set, i.e. how statistically significant certain effects in that data set will be. A larger data set is generally more valuable for training, because it rules out statistical errors and yields a stronger confidence level. The aspect of data diversity should not be underestimated either: how well does the available data cover the space the model lives in? Some companies are able to produce huge data sets from their own operations, but often these data sets are too narrow for the model to improve significantly.
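A back-of-the-envelope way to see the quantity argument (all numbers invented): the margin of error around an estimated effect shrinks with the square root of the sample size.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an estimated proportion p from n samples."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9,}: ±{margin_of_error(0.1, n):.4f}")
# n=      100: ±0.0588  -> a 10% effect is barely distinguishable from noise
# n=   10,000: ±0.0059
# n=1,000,000: ±0.0006  -> the same effect is measured very precisely
```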
The second vector is how proprietary this data set is. The interesting question here is: How did the company get this data set? Is it publicly available? Is it obtained from certain suppliers? Or is it perhaps even collected by the company itself? The more private the data set, the stronger the company's ability may be to serve a certain use case and thereby build a successful business model.
Public Data
A good example of a model built on publicly available data would be an application that predicts certain weather movements. This is a standard case; anybody could build a model based on these data points. The data is publicly available and easy to obtain – you may have to pay for it, but anyone can get it.
Semi-Private Data
An example of using data that is a bit more private would be a model in the predictive maintenance space. You get input data on machine lifecycles, machine usage, and so on from certain machines, OEMs, manufacturers, etc., and with it you train your model to predict certain maintenance intervals. Here the data is not publicly available, but it is not exclusive either: it is still available from a certain number of suppliers that you would need to reach out to. Another company, a competitor for instance, could potentially get a contract with these suppliers and also obtain this data. So it is more private, but still available to others.
Exclusively Private Data
The third and, at least in our understanding, most interesting case is the non-available or exclusive data set. Here the company training the model collects the data set itself, for example with its own machinery, IoT infrastructure, sensors, etc. It thereby creates a data set that is available only to itself for training its models. This in itself creates a very strong moat. Why? Because it will be really hard for other companies, for competitors, to replicate that type of ML model. After all, they would first have to build up the data set themselves, then build the model, and then combine these capabilities.
Bonus Question: Is there a Verticalization Strategy?
Now, there is one interesting development we see regarding these very private data sets that companies are trying to obtain to build their moat. More and more AI companies understand that they need to make the data they train their models on proprietary in order to build a successful business model. How can they achieve this? Collecting data, or even building machines to do so, is asset heavy and takes a lot of time. That is not the fastest way; it is not really the startup way. So what we are starting to see is IP-driven, often academically rooted AI companies entering M&A deals with more traditional industry companies, quite often ones with IoT hardware components. Hospitals in the healthcare space, OEMs, small manufacturers, or even service companies in the industrial space can be interesting for these AI companies. The strategy: obtain these data sets by buying the companies themselves and thereby gaining exclusive data access.
So what we see here are companies deciding to verticalize not by building the assets from scratch and thereby becoming asset heavy themselves, but by recognizing that, in order to make their training data proprietary and train their ML properly, an early-stage acquisition of a traditional industry player is quite an interesting move. Why? Because these players usually come with a vast quantity of unused data whose value they do not realize, as they lack the ML capabilities. Combining the data of the old industry with the ML capacities of an AI startup will be key to building very interesting and predictive models in the long run. We therefore expect to see many companies buying up businesses that own data sets to train their ML models – perhaps even an M&A spree in that sense. We are more than curious to see which AI companies will take over traditional industry companies to obtain that data in an effort to successfully verticalize.
Conclusion
To sum it up, there is a simple early heuristic for assessing the quality of ML. First, figure out whether the company is really doing ML, or whether it is just a rule-based system. Second, clarify whether the model is bought or built. And third, question the data the model was trained with. The common understanding here is: the bigger, more diverse, and more private the data set you train with, the better, in terms of building a differentiated moat and potentially becoming successful as a company. A major key to that will be the acquisition of traditional industry players with vast untapped data in order to verticalize.
Technology is one of the most critical instruments in the sustainability tool kit. Next to governmental regulation, changing social norms and morals, individual consumer behavior, and more, climate tech is a core pillar. But as a lot of climate tech is still very much in its infancy, and the companies ideating and building them are oftentimes far from profitability, venture capital plays a crucial part in fostering startups that can potentially solve environmental and social challenges through innovation.
Since it is a major future growth area, this whitepaper looks into the current state of climate tech and impact venture funding in Europe, and into which climate tech areas should be top of mind for every investor and founder.
Download the whitepaper “State of Climate Tech” now!
Generative AI has been making waves in the VC and startup scene in recent weeks. A refreshing and energizing debate – especially after months of rather unpleasant news about market corrections, investor pullbacks, valuation drops, and layoffs. Better still, it is a debate driven by tech. As tech experts who have assessed startups working in Generative AI before, we are of course super hyped by the exposure the topic is currently getting within the startup ecosystem.
The topic was pushed to the forefront by diffusion models taking over Generative Adversarial Networks (GANs) as state-of-the-art AI models in image generation. Now they are expanding into text-to-video, text generation, audio, and other modalities.
Stability.ai and Midjourney are pushing the envelope there with their text-to-image models rivaling those of established AI labs. While Midjourney is reportedly profitable, Stability.ai secured $101M funding from Coatue, Lightspeed Venture Partners and O’Shaughnessy Ventures LLC, after releasing Stable Diffusion in August 2022. Stable Diffusion is an open source text-to-image model that – different from other generators – was made available publicly for free. Diffusion-based text-to-video generation also took major steps forward earlier this year, with Google and Meta announcing models for text-to-video generation – sooner than expected.
In October, Sequoia Capital brought the topic to everyone's attention by putting together a Market Map on Generative AI, which laid out the main players for Code, Text, Image, Audio, Video, and other areas. Verve Ventures then enhanced Sequoia's map by adding the European players in the respective areas. Unsurprisingly, the map included AI startups we have worked with in the past as well.
The prospects are promising: MIT Technology Review described Generative AI as one of the most promising advances in the world of AI in the past decade. Sequoia estimates that Generative AI has the potential to become a trillion-dollar business, and research firm Gartner predicts a time to market of 6-8 years – with mass adoption in the near-ish future. Whether or not these predictions come true exactly, Generative AI stands to transform tens of millions of creative and knowledge-based jobs and play a vital role in driving future efficiency and value.
What is Generative AI and How Does it Work?
To begin, let us get the terminology straight. What is Generative AI, and which models is it based on? Generally speaking, Generative AI uses existing content – such as text, audio files, images, or code – as source material to create new and plausible artifacts. Underlying patterns are learned and used to create new, similar content. This sets it apart from the well-known Analytical AI, which analyzes data, identifies patterns, and predicts outcomes. One could say Analytical AI mimics the left brain of humans, said to be more analytical and methodical, while Generative AI mimics the right brain – the creative and artistic side. Moving past the automation of routine and repetitive tasks, Generative AI is able to replicate capabilities that to date have been unique to humans: inspiration and creativity.
Moving on to the modeling types. To produce new and original content, Generative AI uses unsupervised learning algorithms. They are given a certain number of parameters to analyze during the training period, and the model is essentially forced to draw its own conclusions about the most important characteristics of the input data. Currently, two model families are most widely used in Generative AI: Generative Adversarial Networks and Transformer-Based Models.
Generative Adversarial Networks (GANs)
A Generative Adversarial Network or GAN is a machine learning model that pits two neural networks – a generator and a discriminator – against each other, hence “adversarial”. Generative modeling tries to understand the structures within data sets and generates similar examples; in general, it belongs to unsupervised or semi-supervised machine learning. Discriminative modeling, on the other hand, classifies existing data points into respective categories and mostly belongs to supervised machine learning. One could also say the job of the generator is to produce realistic images (or fake photographs) from random input, while the discriminator attempts to distinguish between real and fake images.
In the GAN model, the two neural networks contest one another in the form of a zero-sum game – one side's gain being the other side's loss. GANs have long been among the most popular Generative AI models, even as diffusion models have recently taken over the state of the art in image generation.
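For the technically inclined, here is a deliberately tiny GAN sketch in PyTorch; the architectures and the toy “real” data distribution are arbitrary, and it is meant only to show the adversarial training loop, not a production setup.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # toy "real" distribution
    fake = G(torch.randn(64, latent_dim))

    # Discriminator: tell real (label 1) from fake (label 0).
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator into scoring fakes as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```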
Transformer-Based Models
The second model family widely used in Generative AI is based on transformers: deep neural networks that learn context and meaning by tracking relationships in sequential data – for example, the sequence of words in a sentence. NLP (Natural Language Processing) tasks are a typical use case for Transformer-Based Models.
Context is provided around the items in the input sequence. Attention is not paid to each word separately; rather, the model tries to understand the context that gives meaning to each data point of the sequence. Furthermore, Transformer-Based Models can process multiple sequences in parallel, thereby speeding up the learning phase significantly.
Sequence-to-sequence learning is already widely used, for example when an application predicts the next word in a sentence. This happens through iterated encoder layers. Transformer models apply attention or self-attention mechanisms to identify the ways in which even distant data elements in a series influence one another.
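As a minimal sketch of the mechanism described above, here is scaled dot-product self-attention in PyTorch; the dimensions and random weights are toy values chosen purely for illustration.

```python
import math
import torch

def self_attention(x: torch.Tensor, w_q, w_k, w_v) -> torch.Tensor:
    """x: (seq_len, d_model). Every position attends to every other."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / math.sqrt(k.shape[-1])   # pairwise relevance
    weights = torch.softmax(scores, dim=-1)     # context weights
    return weights @ v                          # context-mixed values

d_model = 8
x = torch.randn(5, d_model)                     # 5-token toy "sentence"
w = [torch.randn(d_model, d_model) for _ in range(3)]
out = self_attention(x, *w)
print(out.shape)                                # (5, 8): one context-aware
                                                # vector per token
```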
How Generative AI Will Transform Creative Work
Narratives and storytelling as a form of engagement will remain powerful, as humans are inherently drawn to stories – be it about a person, a business, or an idea. However, good storytelling is difficult and requires content creation in different formats. While we see plenty of other areas being automated and made more efficient, the process of content creation remains manual and quite complex.
Generative AI will help content creators by generating plausible drafts that can function as first or early iterations. AI will also help by reviewing and scrutinizing existing human-written text – from grammar and punctuation to style and word choice to narrative and thesis. By creating content that appears to be made by humans, Generative AI will be able to take over parts of the creative process that until now only humans were capable of. It will be able to review raw data, craft a narrative around it, and put together something that is readable, consumable, and enjoyable for humans.
Previously, Generative AI was mostly known for deep fakes and data journalism, but it is playing an increasingly significant role in automating repetitive processes in digital imaging and audio correction. In manufacturing, AI is being used for rapid prototyping and in business to improve data augmentation for robotic process automation (RPA).
Generative AI will be able to reduce much of the manual work and speed up content creation. Most likely, every creative area will be impacted by this in one way or another – from entertainment, media, and advertising, to education, science, and art.
Challenges and Dangers
While Generative AI brings enormous potential and the steps taken forward this year are truly astonishing, there is the danger of misuse. As with every technology, it can be used for both good and bad. Copyright, trust, safety, fraud, fakes, and costs are questions that are far from resolved.
Violent imagery and non-consensual nudity, as well as AI-generated propaganda and misinformation, are a real danger. Stable Diffusion and its open-source offshoots have apparently been used to create plenty of offensive images; according to Stability.ai, more than 200,000 people have downloaded the code since its release in August.
Pseudo-images and deep fakes can be misused for propaganda and misinformation. With more and more applications publicly available to all users, such as FakeApp, Reface, and DeepFaceLab, deep fakes are being used not only for fun and games, but for malicious or even criminal activities too. Fraud and scamming are another problem, as is data privacy – health-related apps, for example, run into privacy concerns around individual-level data.
Also, due to the self-learning nature of Generative AI, it’s difficult to predict and control its behavior. The results generated therefore can often be far from what was expected.
As with AI in general, machine learning bias is a tremendous problem in the training data of Generative AI. AI bias is a phenomenon in which algorithms reflect human biases due to biased data used during the machine learning process. An example would be a facial recognition algorithm recognizing a white person more easily than a non-white person because of the type of data used in training.
Therefore, we need to be sensitive to AI bias and understand that algorithms are not necessarily neutral when weighing data and information. These biases are not intentional, and they are difficult to identify until they have actually been programmed and baked into software. Understanding these biases and developing solutions for unprejudiced AI systems will be necessary to ensure that existing biases and forms of oppression are not perpetuated by technology.
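To illustrate how such bias can arise mechanically, here is a toy sketch with synthetic data: a model trained on a data set where one group is heavily under-represented typically shows lower accuracy for that group. All numbers and group labels are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each group follows a slightly different input distribution and decision rule.
def make_group(n, shift):
    X = rng.normal(loc=shift, size=(n, 4))
    y = (X.sum(axis=1) + rng.normal(size=n) > shift * 4).astype(int)
    return X, y

# Group A: 900 training samples; Group B: only 100 -> imbalanced training data.
Xa, ya = make_group(900, 0.0)
Xb, yb = make_group(100, 1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on fresh samples per group: the under-represented group
# typically sees markedly lower accuracy.
for name, (X, y) in {"A": make_group(1000, 0.0),
                     "B": make_group(1000, 1.5)}.items():
    print(name, round(model.score(X, y), 3))
```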
These challenges are real – but technology is rarely able to develop and grow without them. Responsible AI offers a way to reduce such drawbacks of innovation to a certain degree, or even to eliminate them altogether.
What Founders and Investors Should Prioritize When Building & Scaling a Generative AI Startup
Research & Development: As so much regarding Generative AI is still in its infancy, research and development will have to be prioritized in any startup that wants to push the envelope in this area. In most cases, a strong research team with sufficiently senior roles and multiple years of experience in Machine Learning will have to set the foundation. With a strong dedication to focused and accelerated research efforts, AI startups can differentiate themselves from competitors and gain a competitive edge.
Modeling and Product Management: Building a mature product organization is key for the commercialization of companies in this space. Strong product management competence with in-depth technical understanding is essential when operationalizing an AI business strategy. Implementing a product framework that supports the growing engineering organization and sets clear priorities should be on the to-do list as well. Investors should focus on this especially from a Series A onwards, since most scientific founder teams in the space lack productization experience and need to hire experienced product leaders. This should be accounted for rather early in the process.
Security and Compliance: Both need to be a priority. It is important to actively track and manage any security vulnerabilities in the system. Guidelines to fulfill the necessary compliance and security requirements should be defined and implemented to achieve production-readiness. This is important particularly in a governance context, but also in general.
Responsible teams need to be aware of and understand the security requirements. There needs to be visibility over changes made to critical infrastructure, so that possible malicious changes do not only become noticeable once they start affecting end users. The tech organization should be able to respond quickly to security incidents in an automated way; otherwise, detecting and resolving issues requires considerable manual effort. In startups and young companies, where processes are often only loosely defined and still manual, this can become a security risk that needs to be on the radar.
Scalable Infrastructure: Generative AI startups should build a secure, scalable and automatically provisioned infrastructure that is easy to manage and controls the cost of computing and data training. The AI models described above require a lot of computing power, since the more combinations they try, the better the chance to achieve higher accuracy.
As startups and growth companies are competing in the Generative AI space, they are under pressure to improve data training and lower the cost of it. In addition, the carbon footprint of data training is an important factor in times in which impact is becoming an increasingly important measurement for investors. AI companies therefore need to strive for more efficiency in training methods as well as in data centers, hardware and cooling.
There should also be a plausible trade-off between the cost of training models and the cost of using them. If a model is used many times over its lifetime, it can bring a proper return on the investment in initial training cost and computing power.
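A simple back-of-the-envelope calculation (all figures invented) shows how this trade-off can be framed: a one-off training cost is amortized over per-request margins.

```python
def break_even_requests(training_cost: float,
                        revenue_per_request: float,
                        inference_cost_per_request: float) -> float:
    """Requests needed before the model pays back its training cost."""
    margin = revenue_per_request - inference_cost_per_request
    return training_cost / margin

# e.g. a $500k training run, $0.01 revenue vs. $0.002 inference cost per call:
print(f"{break_even_requests(500_000, 0.01, 0.002):,.0f} requests to break even")
# -> 62,500,000 requests
```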
Conclusion
With Generative AI, content creators will have technology at their disposal that can learn a representation of artifacts from data and use it to generate new content that can be considered an original artifact.
Generative AI will become increasingly important in the creation of synthetic data that companies can use for different purposes and scale across different formats. AI-generated synthetic audio and video data, derived from text triggered by some initial human input, can remove the need to manually shoot films or record audio: content creators can simply type what they want their audience to see and hear and let Generative AI tools create the content in different formats.
We believe that Generative AI will progress quickly with regard to scientific progress, technological innovation, and commercialization. While we are still at the beginning of this trend, a wide range of applications is on its way, and plenty of use cases are being introduced to the market – ranging from media and entertainment to life sciences, healthcare, energy, manufacturing, and more. Innovative startups tackling problems around manual and time-consuming processes in the creative industry stand at the heart of this development, alongside established platform companies such as Google and Meta. Generative AI will also extend into the metaverse and web3, which have an increasing need for auto-generated synthetic and digital content.
Safety concerns and harmful use of Generative AI, such as deep fakes, pose a challenge and might impact mass adoption with consumers and corporations. Security and compliance guidelines will have to take the growing challenge of bias and general importance of Generative AI governance into account.
As with other types of AI, repetitive and time-consuming tasks will be automated, eliminating certain portions of the tasks and activities currently done by humans. However, instead of eliminating creative jobs, Generative AI will most likely support processes in the creative industry through automation, with a human remaining in the loop as a controlling and refining instance. As an assistive technology that helps humans produce faster, we will see humans and AI work together for better and possibly more accurate results.
Philipps & Byrne Provides Tech Due Diligence on Lumiform Ahead of €6.4M Round
25.7.2022
Lumiform raised €6.4m in a Series A round led by Capnamic.
Lumiform offers an out-of-the-box application that helps businesses automate the workflows of their deskless workforce across all industries.
Philipps & Byrne supported the investment by providing the technology and product due diligence.
Early Stage TechDD: Seatti Raised Seed Round With Acton Capital
5.7.2022
Seatti raised a seed round with Acton Capital. Philipps & Byrne was entrusted to provide the technology and product due diligence ahead of the funding round.
Seatti aims to enable every company and individual to work hybrid effortlessly. With Seatti’s MS-Teams and Azure-AD integrated solution, users can book shared desks, meeting rooms, parking and more. Users can also share their work locations with each other, see who is nearby and meet up.
Tech Due Diligence Ahead of M&A: HomeToGo Successfully Acquires SECRA Bookings GmbH
9.6.2022
HomeToGo successfully acquired SECRA Bookings GmbH. Ahead of the acquisition, Philipps & Byrne was requested by HomeToGo to assess SECRA from a tech and product perspective, to ensure compatibility and provide an expert evaluation for the investment decision.
HomeToGo is the marketplace with the world’s largest selection of vacation rentals, listing millions of offers from thousands of trusted partners, including Booking.com, Vrbo and TripAdvisor.
SECRA Bookings GmbH offers modules and products to professionalize the online marketing of vacation accommodations for more reach, more bookings, and more guests.
TechDD on IoT Platform: Equipment-as-a-Service Provider Synctive Secures Investment from Capnamic
1.6.2022
Synctive got Capnamic on board as an investor to move forward on offering new possibilities in the mechanical engineering industry.
As we are always excited about IoT and engineering, we were a good fit to support the deal by providing the product and technology due diligence.
Synctive enables machinery manufacturers to launch and scale their equipment-as-a-service business model. Synctive is the all-in-one management software designed for successful machine-as-a-service business models.
Technology Due Diligence on Nature Intelligence Startup NatureMetrics Ahead of £12m Round Co-led by Ananda Impact Ventures
26.5.2022
UK-based nature intelligence company NatureMetrics closed a £12m round co-led by Ananda Impact Ventures as well as 2150, SWEN Capital Partners, and BNP Paribas' Solar Impulse Fund, with follow-on from Systemiq Capital.
Philipps & Byrne was part of this journey as partners for the technology and product due diligence, building on our expertise in climate and sustainability tech.
NatureMetrics brings the power of genetics to frontline ecology. They use eDNA analysis to monitor biodiversity and measure natural capital in the environment by uncovering multiple species from complex environmental samples in low-cost and repeatable ways.
Cloud Collaboration Governance: Philipps & Byrne Provides TechDD Ahead of 4 Mio US$ Funding Round on Rencore
18.5.2022
Rencore raised 4 million US$ in a series A funding round led by venture capital investor Capnamic. We supported the investment by providing the technology and product due diligence – not only as audit experts for SaaS, but also on regulatory compliance and governance.
Rencore is a B2B software company providing solutions essential for staying in control of Microsoft Office 365, SharePoint, Teams, Azure, and the Power Platform. Their customers rely on Rencore tools to simplify, automate and speed up their everyday governance, risk, and compliance challenges.
Shift Planning Startup Ordio raised €2.9 M in Seed Round – Philipps & Byrne Supports with Tech DD
10.03.2023
Cologne-based shift planning startup Ordio closed a €2.9 M seed round with Capnamic and Simon Capital. Ahead of the deal, Philipps & Byrne provided the Tech Due Diligence.
Ordio integrates productivity tools with an easy-to-use interface for your deskless workforce. Their app empowers deskless employees to manage their schedules, request time off, pick up extra shifts, and access crucial information.
Tech Due Diligence on Edtech Startup: Edurino raised €10.5 million Series A
05.03.2023
Props to Edtech startup Edurino, which raised €10.5 million in a Series A funding round. The round was led by DN Capital with participation from FJ Labs and Tengelmann Ventures. Existing investors Jens Begemann, btov, and Emerge Education are on board as well. Philipps & Byrne worked with the team during the Product and Technology Due Diligence.
Edurino introduces children from the age of 4 to digital learning in a playful and responsible way and prepares them for school. The offering includes a combination of physical toy and digital learning app and invites children on a story-based journey into the playful world of learning.
Legal Tech Startup Henchman Raises Series A – Philipps & Byrne Supports Deal with Tech DD
04.03.2023
Henchman raised a $7 million Series A led by Adjacent VC and Acton Capital, as well as Conviction VC and several business angels, including the Collibra and Showpad founders, VLAIO – Flanders Innovation & Entrepreneurship, F3 Finance, and Pitchdrive. The Tech DD was provided by Philipps & Byrne.
Henchman enables legal professionals to draft and negotiate complex contracts faster by automatically extracting previously written clauses from their existing contract database.
Tech DD at Impossible Cloud – The First Web3-Based Cloud Storage Solution
02.03.2023
HV Capital led a €7m seed round in Impossible Cloud, alongside 1kx, Protocol Labs, TS Ventures, and very early Ventures. Philipps & Byrne provided the Product and Tech Due Diligence ahead of this deal. Impossible Cloud is the first enterprise-grade cloud storage solution based on web3 technology.
Healthtech Company DrDoctor Secures £10m Investment – Philipps & Byrne Provides Tech DD
23.02.2023
British Healthtech company DrDoctor has secured £10 million in funding, led by YFM Equity Partners, alongside Ananda Impact Ventures and 24Haymarket. Philipps & Byrne supported the investment as tech advisory partner by conducting the product and technology due diligence. DrDoctor's platform empowers clinicians to make data-driven decisions and enables millions of patients to self-book appointments.
Merger of Habyt and Common Creates Industry Leader in Shared-Apartment Business – M&A TechDD Provided by Philipps & Byrne
12.01.2023
Philipps & Byrne was entrusted with conducting the Tech Due Diligence ahead of the merger of shared-apartment companies Habyt and Common. Present in over 40 cities and 14 countries across 3 continents, Habyt and Common will jointly operate over 30,000 units including co-living, studios, and traditional rental apartments, becoming the industry leader.
Habyt develops and manages community-driven and technologically empowered co-living spaces. Common is a global residential manager offering shared apartments for thousands of residents across coliving, microunits, and traditional apartments.
Fortino Capital Concludes Private Equity Investment in Symbioworld – Philipps & Byrne Provides Tech Due Diligence
20.12.2022
With Symbioworld, Fortino Capital concludes their first Growth Private Equity Investment in Germany. Philipps & Byrne conducted the tech due diligence during this PE deal. Symbioworld is a leading software company based in Munich, which has set itself the goal of actively and professionally managing and optimizing the business processes of its customers.
Philipps & Byrne Supports €1.2 Mio Pre-Seed Round on Voxalyze
06.12.2022
Voxalyze raised €1.2 Mio in a pre-seed funding round with Capnamic and seed + speed Ventures. Philipps & Byrne supported the investment as tech advisory partner by conducting the product and technology due diligence.
Voxalyze is the first Podcast Visibility Analytics solution, powering search engine optimization for podcasts on platforms. They aim to help audio content creators meet and increase their audience with data and insights.
Responsible AI Platform Provider QuantPi Assessed by Philipps & Byrne Ahead of €2.5 Mio Funding
10.10.2022
QuantPi raised a €2.5 Mio funding round with Capnamic alongside First Momentum Ventures, New Forge, and Ash Fontana.
We worked with the team as tech advisory partners for the product and technology due diligence ahead of this round.
The QuantPi platform helps eliminate the uncertainty that surrounds delivering AI systems by bringing quality control to every step of the development process. Enterprises can ensure that legal, commercial, and reputational risks related to their AI solutions are identified, assessed, and mitigated.
AI Platform for B2B Commerce Transaction: Tech Due Diligence on Workist
29.9.2022
Workist raised a €9 Mio Series A led by Earlybird Venture Capital.
Philipps & Byrne was on board as tech advisory partner for the product and technology due diligence ahead of this round.
Workist automates B2B transactions around the world to end manual document processing.
€12 Mio Funding to Scale AI Platform: Philipps & Byrne Conducts TechDD on Klaus Ahead of Series A Round
29.9.2022
Klaus raised €12M Series A to scale their AI platform, transforming customer support. Philipps & Byrne conducted the Product and Technology Due Diligence ahead of the funding round.
The Series A round of equity funding is led by Acton Capital. Joining the round were previous investors Icebreaker.vc, CREANDUM, and Global Founders Capital.
Companies use Klaus’ customer service quality management platform to run an effective QA process, coach agents and boost customer retention.
TechDD on Digital Marketplace Startup Timberhub Ahead of €5.8 Mio Funding Round
27.9.2022
Timberhub secured €5.8m in funding, aiming to establish wood as the building material of the 21st century and to drive decarbonization.
Philipps & Byrne provided the Product and Technology Due Diligence ahead of the round, which was led by HV Capital and CREANDUM alongside support from existing investors Speedinvest and the sennder founders.
Timberhub wants to redefine timber trading by building the largest digital marketplace that actively connects buyers and sellers internationally.
Instant Commerce Raises €5.4 Mio – Philipps & Byrne Provides Tech Due Diligence
22.9.2022
Instant Commerce raised a €5.4M seed funding round led by our client HV Capital, alongside Hearst Ventures and firstminute capital. Philipps & Byrne worked with the team as tech advisory partner for the product and technology due diligence ahead of the round.
Instant Commerce is a storefront builder for headless commerce that enables eCommerce brands to build superior online shopping experiences, fast and easy, with best-in-class technology.
Seed TechDD on Decision Intelligence Startup Paretos Ahead of €10 Mio Round
20.9.2022
SaaS startup for decision intelligence paretos extended its seed round to €10 Mio. Investors include UVC Partners, LEA Partners GmbH, Fabian Strüngmann, Interface Capital with Niklas Jansen and Christian Reber, Hannes Ametsreiter, and others. Philipps & Byrne supported the deal by conducting the product and technology due diligence.
Paretos is an AI-based decision intelligence platform for effective, data-driven decision processes. It enables companies to quickly and reliably analyze complex data, generate optimized forecasts and decision proposals, and derive target-oriented measures – thanks to a clear no-code user interface and simple integration solutions, even without any prior data science knowledge.
TechDD Ahead of Series A Round: Skribble Secures CHF 10 Mio With Acton Capital
6.9.2022
Together with VI Partners, btov Partners, Die Mobiliar, Helvetia Venture Fund and Zürcher Kantonalbank, Acton Capital invested in Zurich-based startup Skribble in a CHF 10 Mio Series A. Philipps & Byrne contributed to this successful round by conducting the product and technology due diligence.
By offering a secure digital signature process that caters to legal written form requirements, Skribble today already serves 3000+ clients in more than 30 countries and plans to expand its services across Europe.
Product and Tech DD Ahead of €34 Mio Round on Digital Health Insurer Ottonova
5.9.2022
Ottonova secured a capital increase of €34 Mio with Cadence Growth Capital as lead investor, together with existing investors HV Capital, Tengelmann Twenty-One KG, btov Partners, Earlybird Venture Capital and Vorwerk Ventures.
As we have audited Ottonova before and supported their growth journey for some time, we were entrusted once again with conducting the product and technology due diligence ahead of this funding round.
Ottonova is a digital health insurance company that wants to make the complex topic of health insurance and healthcare simple and transparent.
Tech and Product Due Diligence on Care Startup Marta Ahead of Funding Round With Capnamic
1.9.2022
Marta secured funding from Venture Capital firm Capnamic. Philipps & Byrne provided the product and tech due diligence ahead of this funding round.
Marta offers a marketplace for families, people in need of care, and European caregivers. They are building software solutions to connect caregivers with families all over Europe and to accompany them throughout the care relationship. Their goal is to do better through their software solutions and fundamentally rebuild the market for “24-hour” care.
For more news please visit our News Archive.
Before the summer is officially over, we wanted to take the chance to party with old and new friends from the Startup and Venture Capital scene. Together with our partners from Torq.Partners, Cremanski & Company, Moss, Sastrify, and Thryve, Philipps & Byrne came up with Berlin Vice – a summer closing boat party which was all about minimum business talk and maximum fun and party.
On September 8th, VCs, founders, and executives from tech startups came together to have a good time and experience Berlin by night on its waterways. We created an event for everyone to celebrate, exchange, and network with their peers. Many embraced our motto Berlin Vice – in good old Miami Vice tradition – and got creative with our (voluntary) dress code. From magicians to DJs – we all enjoyed the entertainment on board, the flying buffet, and the open bar.
And while it was mostly fun and games, Berlin Vice showed us once again how closely intertwined the VC and startup scenes are. Socializing and fruitful exchange are important – on every level. This is something we see in our everyday work as well: our findings are oftentimes a conversation starter, facilitating honest discussions between VCs and founders about sometimes uncomfortable truths.
We totally believe that meeting on an authentic human level helps to build real trust and allows people to have these important conversations. We want to create a platform for exactly that, whether it’s at a tech due diligence, a health check or on a boat tour.
Zero Bullshit
We commit to a strict zero bullshit policy! With us you get an honest assessment of strengths, weaknesses, risks, and opportunities, and constructive recommendations to grow in the future.
True to Tech
We come from a strong tech background and provide you with product and technology expertise from classical hardware and software to next-frontier and deep tech.
Tech Analysis from 360°
Tech happens in a business context: Guided by the strategic perspective, we conduct true end-to-end assessments – from teams and leadership all the way to hardware and code.
Standardized, Comparable, Repeatable
Rooted in industry best practices, we offer in-depth tech due diligence and health checks supported by data that help you benchmark, build, and scale your company.
Last year there was a lot of talk in the startup scene about having a CTPO: a CTO (Chief Technology Officer) and a CPO (Chief Product Officer) unified in one person. To be honest, this is not entirely new – this kind of unified role has existed before, in the startup scene and everywhere else. However, for a while it became something of a trend: everybody was talking about the CTPO, thinking it was a great new idea. It was a little like what happened with Agile or the Spotify model. Neither of those were entirely new ideas; they were based on principles that already existed. But people quickly adopted what were perceived as novel approaches without reflecting on them deeply. And I think that is a mistake! While the debate has died down a little, the question of whether to have a CTPO or separate CTO and CPO roles is still very relevant for young companies building their organizational setup. It is something we continue to encounter on a regular basis. That is why I would like to revisit the topic and comment a bit on the CTPO model.
Like Being In a Good Marriage
While I think in some setups and at certain stages this unified function of a CTPO can make total sense, I think it is a bit risky in other contexts or with certain people. Product and tech, although being closely related to each other, usually have very different angles on how they perceive the world in general and the business in particular.
Product management people are usually very business and user oriented. In their position, it is more about the Why. Tech people, on the other hand, are traditionally more focused on the How. Certainly, that has changed over the last twenty years or so, and the roles have moved closer to one another. But still: the emphasis on the business, the ability to calculate a business case and to think strategically in terms of business and product strategy – that is still something product people are usually much better at than tech leaders.
The reason why people at early-stage startups want to combine the CTO and the CPO into a joint CTPO role is, first of all, the (false) hope for a reduced budget. Unfortunately, in most of the cases, this is an illusion because any candidate who really lives up to the expectation is usually very expensive.
The second reason is that you naturally want to bridge the gap between the tech and the product organization – and that makes total sense. After all, you want to have the two as closely together as possible. So there is the expectation that if you have one person leading both tech and product management, that they will be uniting those two teams, and you will not have that gap. However, I think that is a little bit of an illusion as well. Almost every person that I know in one of those roles has a certain preference, a certain background, and level of expertise. Usually, you are either best at one or the other.
Oftentimes, when a person who is technically very strong takes over the CTPO role, they have a certain bias towards technical decisions. That bias can be unconscious, which makes it even more dangerous. The same holds true the other way around: if someone is very strong at product but tech is their weak spot, their decisions oftentimes favor product and business, and sometimes tech does suffer.
So revisiting this topic, I personally am still very much a fan of having two people in those two roles. In a well-working setup, it is like being in a good marriage. You are fighting here and there, and you do not always have aligned interests, but you figure out a way. It is a constructive fight that you are having – more like wrestling. And in the end it is a joint effort, and you achieve a shared goal. So, if you have one CTO and one CPO then you always have a sparring partner whether you want it or not – again, like in a marriage. And they remind you of something you tend to forget.
Before You Know it, Complexities Can Become Overwhelming
If you have a very early stage company with a small team size and a very narrow focus to look at, and you are currently in the process of building a prototype and MVP, a very early first version – fine, have a CTPO model. No problem at all. You will be able to handle the context, team, technology, product and everything else at once. But as the company grows, the complexity grows as well. Naturally, you will have to deal with a bigger product and technology scope as well as team size. And of course, you also need to take care of the market: Do you have a good product-market fit? Do you have the right business and product strategy? And how do they align with one another?
Suddenly you will be dealing with a lot of topics, which can quickly become overwhelming. To be honest, most CTOs and CPOs that I know are not entirely capable of handling all of that or even overseeing it by delegating it to the right people. Usually they are pretty good at what they are doing, but they are also challenged with the daily issues, topics, and requirements you have in a fast-growing startup.
So if you are in the first phase of a startup, or if you are a well-established scale-up with the budget to hire a top-notch CTPO with plenty of experience, seniority, and strategic knowledge, then the joint role can make sense. Go for it! But in the phase in between, I am not sure this is the best solution. Take a closer look at your organization and its requirements, and at the potential candidates for separate roles or a unified one. And I would remind every CTO and CPO to do some real soul-searching about whether this is the right next step in their development. Can you already master a CTPO role easily, or would you rather focus a bit more on improving the skills in your core domain or expertise before you make that next step – because it will be a lot!
Find Out What Works For You!
So is the CTPO role right for you – or separate CTO and CPO roles? In the end, as always in life, it depends. Of course, there are people and setups where a CTPO model works best. This is the ideal. But let’s face it: Many people in Tech and Product leadership positions are already challenged with one of those roles. Both are demanding positions which require lots of skills. The war for talent is already incredibly tough, even if you just look for either a decent CTO or a capable CPO.
At the end of the day, you have to be honest. The CTPO role is not for everyone. If you find one of those gems who are equally capable of both Product and Tech, and it works for your setup, consider yourself extremely lucky. Gems are rare for a reason. But do not necessarily think that this has to be the way and blindly follow that trend. Because in many cases a team – and it is a team sport after all – of CPO and CTO works much better than an overwhelmed or even biased CTPO.