Why Trustworthy AI is Crucial for Business Growth

As an aerospace engineer, I learned the importance of trust. Businesses must apply the same principles to AI integration for growth and safety.

By Adam Markowitz · 4 min read · Oct 25, 2025

During my tenure as an aerospace engineer on the NASA Space Shuttle Program, the concept of trust was paramount. Each component—from the tiniest bolt to complex lines of code—had to undergo rigorous validation and testing to ensure that the shuttle could safely embark on its journey. After each mission, astronauts would walk through our offices, expressing gratitude to the thousands of engineers who played a vital role in their safe return to their families. This deep-rooted culture of trust and safety was integral to our operations.

In the tech world, despite the prevalent mantra of "move fast and break things," the need for trust is equally critical. New technologies must cultivate confidence before they can truly drive growth.

Forecasts indicate that by 2027, approximately 50% of enterprises will have deployed AI agents. Furthermore, a McKinsey report suggests that by 2030, AI agents could conduct up to 30% of all work. Many cybersecurity leaders I engage with are eager to integrate AI rapidly to empower their businesses, yet they also acknowledge the necessity of implementing this technology safely and securely, with appropriate safeguards in place.

For AI to realize its potential, business leaders must develop a foundation of trust in these systems. This won't occur spontaneously. Security professionals need to draw lessons from the field of aerospace engineering and embed trust into their processes from the very beginning—or risk missing out on the accelerated business growth that AI can facilitate.

The connection between trust and growth isn't just theoretical; it's something I've experienced firsthand.

Building a Business on Trust

After the conclusion of NASA’s Space Shuttle program, I took the leap to establish my first company—a platform designed for professionals and students to showcase their skills and competencies. While the concept was straightforward, it required a significant level of trust from our customers. We quickly realized that universities were hesitant to partner with us until we demonstrated our ability to securely manage sensitive student data. This meant providing assurance via various methods, including obtaining a clean SOC 2 attestation, completing extensive security questionnaires, and navigating multiple compliance certifications through meticulous manual processes.

This experience influenced the inception of Drata, where my cofounders and I set out to create a trust layer among reputable companies. By helping Governance, Risk, and Compliance (GRC) leaders assert and demonstrate their security posture to customers, partners, and auditors, we eliminate barriers and foster growth. Our journey from $1 million to $100 million in annual recurring revenue within a few years shows that businesses recognize the value of this approach and are gradually shifting their view of GRC teams from cost centers to vital enablers of business success. This shift has produced tangible outcomes: our SafeBase Trust Center has influenced up to $18 billion in security-related revenue.

The stakes are even higher now with the advent of AI.

Today's compliance standards and regulations, such as SOC 2, ISO 27001, and GDPR, were crafted with data privacy and security in mind, not for AI systems that autonomously generate text, make decisions, or act independently.

Legislation like California’s newly enacted AI safety standards indicates that regulators are beginning to catch up. However, merely waiting for new laws and guidelines is insufficient, especially as companies increasingly depend on innovative AI technologies to maintain their competitive edge.

You Wouldn’t Launch an Untested Rocket

This moment in the tech landscape reminds me of my days at NASA. As an aerospace engineer, I never operated under the philosophy of "testing in production." Every shuttle mission was the result of meticulous planning and preparation.

In the realm of AI, we cannot afford to take shortcuts. The consequences of deploying untested AI systems can be dire, just as the consequences of a malfunctioning rocket can be catastrophic. As businesses embrace AI, they must prioritize building trust into these systems, ensuring robustness and reliability from the start.

To build a sustainable future with AI, we need to commit ourselves to a culture of trust and accountability—just like we did in aerospace. This approach will not only safeguard businesses but also unlock the potential for innovation and growth that AI promises.

Tags:

#Artificial Intelligence
