The rate of AI development is ushering in a new Moore's Law, with capabilities doubling every few years according to some experts. According to Stanford, the doubling can occur as frequently as every three months. Irrespective of the actual rate, the compound growth is exponential and impressive. However, this growth comes with caveats: AI still suffers from a critical problem, differentiating between reality and hallucination when encountering data. Autonomous driving systems still miss pedestrians in some cases, while conversational AI systems fabricate facts outright at times. Probabilistic AI provides probable explanations of data, the means to update those explanations in light of new information, and ways to estimate their quality, both to understand how well models are working and to improve them. Normal Computing is a generative AI application development platform that uses probabilistic AI with a focus on reliability, accuracy, adaptivity, and auditability. In real-world production cases, a simple mistake can have massive financial consequences, and even more dire ones in transportation or healthcare applications. Developed by alumni of Google Brain, Palantir, and X, Normal is focused on ensuring that applications can be developed with certainty that workflows are reliable, transparent, and, most importantly, accurate.
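As a loose illustration of that propose-update-evaluate loop, here is a toy Bayesian update in Python. This is not a description of Normal's technology; the hypotheses and numbers are invented purely for illustration:

```python
# Toy Bayesian update: start with prior beliefs over competing explanations,
# then reweight them as new evidence arrives.
priors = {"sensor_is_accurate": 0.7, "sensor_is_faulty": 0.3}

# Likelihood of observing an implausible reading under each explanation.
likelihoods = {"sensor_is_accurate": 0.05, "sensor_is_faulty": 0.60}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: p / total for h, p in unnormalized.items()}

for hypothesis, p in posteriors.items():
    print(f"{hypothesis}: {p:.2f}")
# The implausible reading shifts belief toward the faulty-sensor explanation,
# and the posterior probabilities quantify how much to trust each one.
```

The posterior probabilities are exactly the "quality estimates" the paragraph above alludes to: they tell an application how much weight each explanation of the data deserves and when to seek more evidence.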

AlleyWatch caught up with Normal Computing CEO and Cofounder Faris Sbahi to learn more about the business, the company's strategic plans, recently announced round of funding, and much, much more…

Who were your investors and how much did you raise?

We raised $8.5M in Seed funding. The round was backed by Celesta Capital, First Spark Ventures, and Micron Ventures.

Tell us about the product or service that Normal Computing offers.

Normal is building a Generative AI application development platform for critical enterprise applications. The platform is intended to build workflows reliable enough for intricate and high-stakes real-world contexts like synthesizing financial recommendations in underwriting and generating tests for highly specialized code where a single mistake can cost an enterprise millions.

What inspired the start of Normal Computing?

Amongst major AI innovations – like scaling large language models on GPUs – there often remains a significant gap between these new capabilities and the requirements of real-world production use cases, where information is incomplete, noisy, and constantly changing. The reality is that successful solutions are typically resource-intensive and limited to the largest tech companies like Alphabet and Meta. We saw the same thing happening with the early solutions we pioneered in the emerging paradigm known as probabilistic machine learning.

We believe that there are at least two types of risks if these innovations are not emphasized and shared with the rest of the ecosystem, especially as AI systems begin to touch areas like materials, nanotechnology, biology, and medicine. Either we cannot ensure these systems are not misused because no one develops the technology fast enough, or we become entirely dependent on large tech companies because they are the only ones that have it.

We also discovered something else related to making probabilistic machine learning scale efficiently, but we aren’t ready to share details on this yet.

Our founding team comes out of Google Brain and X. During their time at these companies, they were responsible for applying probabilistic machine learning to some of the largest-scale and most mission-critical production systems at Alphabet. This led to watershed gains in revenue and quality, due to the reliability and real-time decision-making improvements it brought to those AI systems.

The founding team also includes founders of TensorFlow Quantum and TensorFlow Probability, who have now teamed up with much of the talented probabilistic ML ecosystem. This includes leaders from Meta's disbanded Probability organization, like our ML Lead, Thomas, and Los Alamos National Lab's former head of quantum AI, Patrick.

The founding team left Alphabet based on the belief that they could bring these same kinds of advantages from probabilistic machine learning to Generative AI.

How is Normal Computing different?

Normal Computing's Probabilistic AI brings unprecedented control and scale in reliability, adaptivity, and auditability to AI models.

In response to a question like "What recommendations would you provide for my client looking to save for their kid's college?", a typical Large Language Model (LLM) deployed to assist a financial advisor by synthesizing across various data portals and policies might make up (hallucinate), or provide out-of-date or impersonal, details that are critically relevant to decision-making. It may also fail to provide the transparent reasoning that would be needed for an audit. In contrast, with Probabilistic AI, models can detect when they synthesize inaccurately by also generating probable, auditable explanations of how they reached a conclusion, and can even revise themselves by adaptively making an additional query to a datastore or a human-in-the-loop.
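To make that pattern concrete, here is a minimal, hypothetical sketch in Python. It is not Normal's actual platform or API; every function, name, and threshold here is an illustrative assumption. The idea is to attach a confidence estimate and an auditable explanation trace to each model answer, and when confidence is low, adaptively query a datastore and, failing that, escalate to a human-in-the-loop:

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    confidence: float  # estimated probability the answer is correct
    explanation: list[str] = field(default_factory=list)  # auditable reasoning trace

def query_model(question: str, context: str = "") -> Answer:
    # Stand-in for a real LLM call that also returns a calibrated confidence
    # score. With client-specific context, we pretend the model is more sure.
    confidence = 0.9 if context else 0.55
    return Answer(
        text="Consider a 529 plan with monthly contributions.",
        confidence=confidence,
        explanation=[f"question: {question}", f"context used: {context or 'none'}"],
    )

def fetch_client_records(question: str) -> str:
    # Stand-in for an adaptive lookup against an enterprise datastore.
    return "two children, ages 3 and 6; moderate risk tolerance"

def answer_with_fallback(question: str, threshold: float = 0.8) -> Answer:
    answer = query_model(question)
    if answer.confidence < threshold:
        # Low confidence: adaptively pull client-specific data and retry.
        context = fetch_client_records(question)
        answer = query_model(question, context=context)
        answer.explanation.append("revised after datastore lookup")
    if answer.confidence < threshold:
        # Still uncertain: route to a human reviewer instead of guessing.
        answer.explanation.append("escalated to human reviewer")
        answer.text = "[needs human review] " + answer.text
    return answer

result = answer_with_fallback(
    "What recommendations would you provide for my client saving for college?"
)
print(result.text)
for step in result.explanation:
    print(" -", step)
```

The key design point in this sketch is that the fallback logic consumes the model's own uncertainty estimate, so an unreliable answer triggers more evidence-gathering or human review rather than a confident-sounding guess, and the explanation trace gives auditors a record of how each conclusion was reached.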

What market does Normal Computing target and how big is it?

Normal is initiating pilots with Fortune 500 companies across multiple verticals, now targeting key sectors like semiconductor manufacturing, supply chain management, banking, and government.

What’s your business model?

Right now, we’re focused on Enterprise B2B. We’re committed to working collaboratively with our clients to enable applications that routinely involve multiple stakeholders, a complex data landscape, and sophisticated security policies.

How are you preparing for a potential economic slowdown?

We are being thoughtful about our capital allocation. We believe that our work serves as much of a critical function in a slowdown as in the alternative. This is because reliable AI systems can serve a key role in improving operational efficiency for enterprises by augmenting their workforce to make the best decisions and automate repeatable processes.

What was the funding process like?

It was a lot of fun; it was like speed dating to ultimately find our superteam of investors. The key was finding folks who really believed in our short- and long-term vision.

What are the biggest challenges that you faced while raising capital?

As first-time founders, you don’t exactly know where to start. At first, it feels like a bit of a random walk, going from intro to intro. And then you realize you’re getting closer. And then you’ve raised your round!

What factors about your business led your investors to write the check?

A big vision that aims to solve a critical problem for enterprise and society at large. And a team that has the passion, drive, and skills to go after it thoughtfully and effectively.

What are the milestones you plan to achieve in the next six months?

We've been building out the team and bringing together folks from various walks and paths. This has been a major part of the excitement. What we're doing requires bridging folks from different spaces that typically don't intersect, from academia to physics and computer science. This is one of the powerful facets of being a full-stack company: the interdisciplinary nature of the work. Recruiting experts with a track record of enabling strategic advantages in use cases where risk has been a central barrier to AI adoption across Fortune 500 companies has also been crucial.

Right now, our focus is on working closely with our clients to succeed on our enterprise pilots and iterating on our core application development platform so that it delivers immediate, near-term value on some of the hardest problems in the space.

What advice can you offer companies in New York that do not have a fresh injection of capital in the bank?

Keep after it and stick to your core vision. For the details in between, keep an open mind and open ears. With many problems, one of the best things you can do is reach out to folks who have taken similar journeys before. Some of our advisors have served a massive value-adding function by sharing their lessons and helping us quickly improve our approach. This includes folks like Suraj Bramhavar at Sync Computing, Will Zeng, formerly the quantum lead at Goldman Sachs, Chiu Chau, the former CEO of OpenTrons, and Susanne Balle from Intel.

Where do you see the company going now over the near term?

We’re growing thoughtfully, investing in iterating on the MVP, and scaling out our engagements.

What’s your favorite summer destination in and around the city?

In the summer, I really like to stay around the city for the most part. Being active and spending time in the parks is great, especially Prospect Park. One of the great aspects of New York is the variety of folks you get to meet. In the AI space – and even in the general entrepreneurial space – we have a pretty tight-knit community. We do an awesome job hosting programming like hackathons, rooftop hangouts, barbecues, and intimate dinners. You get to meet a wide diversity of folks – it's what New York does best!

I find myself having a harder time sticking around in the winter when the weather changes! I like to spend time in Latin America and other places where I speak the local language, like the Middle East.

