Flying into San Francisco is an exercise in remembering. There are still the same ads for software you’ve never heard of. There are still green juice bars selling shot glasses of bitter slime for $9. There are still aggressive amounts of athleisure. There are still the 10% too loud overheard conference calls, where entrepreneurs pitch their vision of the future. There are still the drab office buildings hiding the technology magic happening within. There are still venture capitalists roaming around, their fangs dripping with management fees and the occasional third spouse who used to be their personal assistant. There is still that unique combination of techno-optimism and pragmatic capitalism that says technology will bring about utopia while simultaneously producing judicious cash flows.
San Francisco still is, as it has been for the last 20 years, a potent cocktail of hope and hedonism, hackers and haters, visionaries and vampires. It is a place where the future is made.
Now the Bay Area is roiling with AI fervor. The long-awaited arrival of our robot overlords may finally be at hand, heralded by the explosive invention of AI tools that can easily generate text and images. Previous tech waves have been fiscally lucrative but did less to advance humanity than hoped. Crypto has been a bust, B2B software was and is and always will be boring (while useful), but AI could be the technological revolution that has been promised to us since the smartphone era.
AI is so seductive because using these recent advances feels like magic. Five years ago, it would have taken a team of engineers a few years and millions of dollars to build a fairly primitive AI product. Now, with advancements made by firms like OpenAI, anyone with a middling understanding of code can use cutting-edge AI. Thought leaders have been disseminating this AI opportunity narrative through newsletter and blog post I.V. drip lines, and all of their pontifications argue the same thing: AI is for real this time.
These writers—myself included—have helped drive the excitement of engineers to a fever pitch. Every tinkerer I know is building an AI product right now, and even the roughest of demos are breathtaking. I gasp as I scroll Twitter, overwhelmed by new products I didn’t even know were possible, all being built as side projects. Here at Every, we’ve used this new tech to build a Google Docs competitor and a chatbot that helps you find information from one of our favorite podcasts. We’re just a small media publication, yet we’re able to build products with capabilities that would’ve been impossible six months ago. When you contemplate the entirety of the technology industry doing this sort of hacking, the possibilities become mind-blowing.
To observe this phenomenon, I returned to San Francisco, my home of many years, to attend an AI hackathon organized by HF0, a residency program for top technical founders. I wanted to watch AI techno-awe in person, and (full disclosure) they gave me free meals and a place to crash.
What is AI?
Over the week, I asked more than 30 people one question: what is AI?
Surprisingly, no one actually agreed on an answer. I compiled a list of some of the definitions that I heard:
- A nice man wearing three items of clothing with three different startup logos on them told me that AI is math that dumb people can’t understand.
- A person with neon-bright blonde hair told me that AI is math no one understands.
- A former Google engineer told me AI is software that makes you “shit your pants in fear.”
- A man with a slick goatee told me that AI was a god we think we can tame (I rather liked this one).
- A woman with a laptop covered in a variety of stickers told me that AI was a product that requires more than 50% of your staff to be PhDs to build.
This motley assortment of answers is fascinating because for many years we had a fairly standard benchmark for what AI was. We used the Turing test, which judged intelligence by a machine’s ability to hold a convincing conversation in natural language. Arguably, we are nearly at, or have already surpassed, that point. Turing himself framed intelligence this way: "The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind as by properties of the object under consideration. If we are able to explain and predict its behavior we have little temptation to imagine intelligence. With the same object, therefore, it is possible that one man would consider it intelligent and another would not; the second man would have found out the rules of its behavior."
Yet not once, in a week dedicated to the exploration of AI, did anyone mention Alan Turing. Instead, a much more expansive definition of AI seemed to form—one that went far beyond natural language processing to incorporate images, autonomy, robotics, and audio. AI is something much bigger to these hackers.
My best-guess definition of AI is:
AI is software that uses some form of machine learning to do things we didn’t think were possible.
This is less specific and less satisfying than Turing’s framing, but it is probably more accurate to how the term is used today. If any part of that definition could be cut, it would be the machine learning component: the math and techniques are changing so rapidly that it’s fair to assume that in 10 years we’ll be using an entirely different architecture to build AI. The “things we didn’t think were possible” part, by contrast, is an ever-shifting boundary. In the near future, many of the technologies that we consider AI, like image generation or self-driving cars, will be generic software capabilities. However, the boundary of possibility is permeable, always shifting toward more. The ultimate goal of AI is AGI—artificial general intelligence, a piece of software that mimics or surpasses human capacity and capability. Part of the difficulty in writing about this sector is that people will say “AI” and mean some version of all of the above.
There is also a big difference between AI research and AI products. No one I talked with at the hack week was trying to come up with their own novel research model. Instead, most were trying to apply OpenAI models or other open-source projects to distinct problems—email clients with GPT-3, puppy avatar photos, summarization of data sets, and the like.
My 11-month-old Aussie doodle puppy, Maple, rendered as a cyberpunk dog by one of the tools from the hackathon.
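To give a sense of how thin the technical layer in many of these projects was, here is a minimal sketch of the kind of GPT-3 wrapper a hack week summarization project might amount to. It assumes the openai Python library (pre-1.0 interface) and an API key; the model choice, prompt, and function name are illustrative, not any particular team’s code.

```python
# A minimal sketch of an "apply GPT-3 to a problem" hack week project: summarizing text.
# Assumes the openai Python package (pre-1.0 interface) and an OPENAI_API_KEY
# environment variable. Model name, prompt, and function name are illustrative.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def summarize(text: str) -> str:
    """Ask a GPT-3-era completion model for a three-sentence summary of `text`."""
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3-era completion model
        prompt=f"Summarize the following in three sentences:\n\n{text}",
        max_tokens=150,
        temperature=0.3,  # keep the summary relatively conservative
    )
    return response["choices"][0]["text"].strip()


if __name__ == "__main__":
    print(summarize("Paste a long transcript or document here..."))
```

The point is less the code itself than how little of it there is: the model does the heavy lifting, and the builder’s work is mostly in framing the prompt and the product around it.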
More interesting to me than what was being built was why.
A new vibe
The vibe of these hackers was different from what I had encountered before. Crypto events had a flavor of manic greed that permeated most conversations. B2B software events had “Aw, shucks” energy appropriate to people building boring things for fun. At this event, and in the conversations I held with AI startups during my visit, the vibe was something new.
Building in AI is equal parts awe and fatalism. There’s a sense that you’re building with digital uranium. It is powerful, it can bring about a new world order, and if we aren’t careful, it can kill us all.
In the last 20 years of building tech products, most of the creative destruction that occurred was caused by second-order effects: the people who were made redundant and the communities that were hurt were far away from the builders of San Francisco. AI offers no such comforting blinders. There is a hyper-awareness that what you build can put someone out of work. And if you’re building something of grander ambition, like AGI, there’s a chance you destroy humanity, Skynet-style. On the other hand, there is also a tender hope that AI could elevate humanity into a post-scarcity society that eases suffering and affliction.
These emotions, blended together, made for a more thoughtful hack week than others I’ve attended. There was still the careening, crazy energy that accompanies trying to build something within a week. People were visibly excited when telling me about plans for their products. However, in the quiet moments, in the times between frantic coding sessions, people sometimes expressed to me quiet concerns about AI. There was an earnest desire to do the right thing with this new tech.
To build in AI, you have to believe at some level that these underlying technologies will end up being net-good. The people I encountered at this event were earnest in that hope. So many previous technology cycles have left people feeling disappointed; maybe in AI, the promised messiah had come.
The most surprising language I encountered was religious. This group of highly intelligent, mostly atheist engineers used language bordering on the divine to describe their work. AI was “God.” AGI could cause “humanity to ascend to a higher plane.” AI, if gone wrong, “could turn earth into an apocalyptic hellscape.” When I asked follow-up questions about these statements, the religious parallel deepened: the hackers usually didn’t have evidence for these beliefs. The technology was so mysterious, so powerful, that they reached for the language of godhood to describe it.
It wasn’t that they viewed themselves as members of a newly formed priesthood, though someone was building a chatbot meant to simulate a conversation with God. It’s more that whenever humans encounter something mysterious and powerful and semi-unexplainable, we have nothing but the language of spirituality to describe it. The least theistic of the individuals I met were typically those doing cutting-edge research at labs like OpenAI—people who described AI in terms of the formulas used to derive its capabilities. The product builders and those interested in building companies were much more spiritual in their pursuits.
I had gone back to San Francisco unsure of what I would find. As a strategist, I figured I would end up writing my usual fare, analyzing the prospects of startups in the sector. What ended up drawing me in were fascinating people who were wholeheartedly throwing themselves at this new technology. If you believe as I do that AI is the future of the technology sector, then these people are the founders—or priests—of tomorrow.