At the beginning of 2021, I wrote a blog post covering some of the most interesting ML I had seen in 2020. 2022 has seen huge advances since then. The high-level recap of that post was:
- AI/ML is coming…fast
- AI/ML is starting to get really good. In some cases, its output is becoming indistinguishable from human-created work, which means problems like impersonation are serious
- AI/ML has an opportunity to displace a huge portion of the economy. Yes, your job might be “at risk” — don’t discount that
- Many people thought art was “safe” from AI/ML; it was not (and still isn’t)
Overview of 2022
If I were to classify 2022 in AI/ML, I’d say it was “the starting year of the democratization of AI/ML.” AI/ML compute costs and tactics have made many AI/ML-driven endeavors practically available to the layperson, where historically they required thousands (or millions) of dollars in hardware costs and highly specialized ML engineers. To see this democratization for yourself, try any of the following, many of which are discussed below:
- Go sign up for a ChatGPT account. Ask ChatGPT to write you a blog or write some code for you or answer a technical question for you.
- We just launched Vectara, which uses large language models to understand both the documents and what you’re searching for. And we have a free tier offering (and it’s otherwise cost-competitive with traditional keyword systems).
- You can generate an image of your choosing for free online (or on your own computer) with Stable Diffusion; a minimal local-generation sketch appears below. DALL-E is $0.02 per image at its highest price point.
- GitHub Copilot is ~$10/mo to help you write software.
These are entirely affordable for highly skilled tasks! These costs are only going to come down and the quality is only going to get better.
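To make the Stable Diffusion point concrete, here is a minimal sketch of generating an image locally with Hugging Face’s diffusers library. Treat it as illustrative: the model ID is one public Stable Diffusion checkpoint, and it assumes you have a CUDA GPU and the diffusers/torch packages installed.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (illustrative model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit on consumer GPUs
)
pipe = pipe.to("cuda")

# One text prompt in, one image out.
prompt = "an empty trashcan falling over, cartoon style"
image = pipe(prompt).images[0]
image.save("trashcan.png")
```

On a consumer GPU this takes on the order of seconds to tens of seconds per image, which is exactly the democratization point: the marginal cost of a “commission” is basically electricity.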
ChatGPT
We should probably talk about ChatGPT first, since it’s been in the news most recently and arguably most significantly. It’s worth asking why that’s the case: yes, the technology is incredible, and yes, the way the application works is almost contagious, but I think the biggest factor is that ChatGPT is the first time journalists can directly connect with what’s actually happening, because it does creative and analytical writing in many ways the same way they would.
Here are a few things my friends and I have asked ChatGPT to respond to:
| Prompt | ChatGPT Response |
| --- | --- |
| What does an education attorney do? | An education attorney is a lawyer who specializes in education law. They provide legal advice and representation to individuals, schools, and education organizations on a variety of education-related issues. This can include issues related to students’ rights, special education, discipline, discrimination, and other areas of education law. Education attorneys may also be involved in advocacy and policy work related to education. They may work for a law firm, government agency, or non-profit organization, and may also serve as private consultants to individuals or schools. |
| What is the API call standard to retrieve details on a zip code in Google Maps? | The API call standard to retrieve details on a zip code using Google maps is: https://maps.googleapis.com/maps/api/geocode/json?address=zip_code&key=API_key Where “zip_code” is the zip code to be searched, and “API_key” is the API key provided by Google Maps. |
| I’m a home brewer. Can you write me a beer recipe named The Winter Telegraph? I really like Belgian strong ales. It should include some coriander. | Sure, here’s a recipe for a Belgian strong ale that could be called The Winter Telegraph: The Winter Telegraph Ingredients: – 9 lbs Belgian Pilsner malt – 1 lb Belgian candi sugar – 1 lb Munich malt – 1 oz Styrian Goldings hops (bittering) – 1 oz Saaz hops (aroma) – 1/2 oz coriander seeds – 1/4 oz orange peel – Belgian ale yeast (such as Wyeast 3522) Instructions: 1. In a large pot, heat 6.5 gallons of water to 155°F. 2. Crush the malt and add it to the water. Stir to distribute the grains evenly. 3. Hold the temperature at 155°F for 60 minutes. 4. Add the candi sugar and stir to dissolve. 5. Bring the temperature up to 168°F and hold for 10 minutes to pasteurize the mash. 6. Strain the liquid (called wort) through a fine mesh to remove the solids. 7. Transfer the wort to a clean pot and bring to a boil. 8. Once the wort is boiling, add the Styrian Goldings hops and boil for 60 minutes. 9. With 15 minutes left in the boil, add the Saaz hops and the coriander seeds. 10. With 10 minutes left in the boil, add the orange peel. 11. After the boil is complete, cool the wort to 68°F and transfer to a fermenter. 12. Pitch the yeast and ferment at 68°F for 7-10 days, or until fermentation is complete. 13. Transfer the beer to a keg or bottles and let it age for 2-4 weeks before enjoying. This recipe will produce a flavorful, complex Belgian strong ale with hints of coriander and orange. The final beer will have a golden color and an alcohol content of around 8-9% ABV. |
| Can you write a blog on why people should not use stopwords for their search system? | Sure, here is a blog on why people should not use stopwords for their search system: Stopwords are common words that are often filtered out of search queries and text analysis because they are believed to be uninformative and do not contribute to the meaning of a sentence. Examples of stopwords include “a,” “an,” “the,” “and,” and “but.” While stopwords may seem like an easy way to improve the efficiency of a search system, they can actually be detrimental to the search experience for a number of reasons. Stopwords can be meaningful in certain contexts: In some cases, stopwords can be essential to the meaning of a sentence or query. For example, a search for “the king” could refer to any king, while “the King” with a capital “K” could refer specifically to Martin Luther King Jr. Removing stopwords in this case would result in a less accurate search. Stopwords can improve search relevance: Even when they do not directly contribute to the meaning of a sentence, stopwords can still be useful for improving search relevance. For example, including the stopword “not” in a search query can help to narrow down the results and exclude irrelevant content. Stopwords can provide important context: Stopwords can provide important context that helps to better understand the meaning of a sentence or query. For example, the stopword “but” can indicate a contrast or exception, and the stopword “because” can indicate a causal relationship. Removing these words can result in a less accurate interpretation of the search query. In conclusion, while stopwords may seem like a simple way to improve the efficiency of a search system, they can actually be detrimental to the search experience. Stopwords can be meaningful in certain contexts, improve search relevance, and provide important context that helps to better understand the meaning of a search query. It is important to carefully consider the use of stopwords in a search system to ensure the best possible search experience for users. |
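As an aside, the Maps answer above is easy to sanity-check. Here’s a minimal sketch of that geocoding call using Python’s requests library; the example zip code and the API key placeholder are mine, not part of ChatGPT’s answer:

```python
import requests

# Call the Google Maps Geocoding endpoint ChatGPT described above.
resp = requests.get(
    "https://maps.googleapis.com/maps/api/geocode/json",
    params={"address": "80202", "key": "YOUR_API_KEY"},  # example zip; bring your own key
)
data = resp.json()
if data["status"] == "OK":
    print(data["results"][0]["formatted_address"])
else:
    print("Geocoding failed:", data["status"])
```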
Of course, the implications are much broader. Questions swirl: will (something like) ChatGPT replace search engines? Will kids (continue to) use it to do or edit their homework? Can it actually write or edit marketing brochures and similar copy? I think we’re well past the point where the answer to each of these is anything other than “obviously, yes.” Maybe not perfectly now, but people aren’t perfect either, and it will continue to get (a lot) better each year.
As a reminder: things don’t need to be perfect to be “good enough.” Have you ever been given a bad weather forecast? Gotten an e-mail with grammatical or content errors? Has a doctor ever misdiagnosed an illness? Have you ever experienced a software bug? Of course the answer to all of these is “yes.” We’re used to people making errors, and we will become used to machines making some errors too. When the error rate is lower for machines than humans and the cost is lower, the reasons for switching will be obvious. But even with a higher error rate, if the cost is several orders of magnitude lower, switching might still be obvious.
AI Wins an Art Contest
As I mentioned in 2021, AI is coming for the artists, and in 2022, we saw AI “win.” At the Colorado State Fair, Midjourney, an AI company currently focused on image/art generation, won first place in the digital art contest. Or rather, Jason M. Allen, the person who asked Midjourney to create the art, did. As you can imagine, this didn’t go over well with other artists who competed and felt cheated.
For a few bucks a month, with DALL-E or Midjourney or Stable Diffusion, you too can “commission” a huge number of pieces across different styles and subjects. Will they be perfect? To answer that, we should think seriously about “what is perfect art?” and “does everyone appreciate ‘good’ art?” I think the most immediate applications are in digital art, and in particular, web graphic design. Here are some images I’ve had DALL-E create, along with the prompts that created them:
The prompts (images not reproduced here):
- “An Internet badge of honor for sending the 1st e-mail vector image style”
- “Two boys digging in sand to excavate dinosaur bones in the style of a childrens coloring book”
- “an empty trashcan falling over, cartoon style”
Are these absolutely perfect? No. But they were also free and completed in seconds. And I used the last of these as the underlying basis for an image I finished and published on a blog.
Make no mistake: the rate of innovation in art generation is so high that in the two years since my 2020 recap, we’re truly in a different world order now. And you should not expect this pace to slow down: we’re already seeing AI-driven video generation from a text prompt, just like the image versions here. It’s early, so expect mediocre quality at best, but that was also true of image generation in 2020.
Writing Code
Perhaps most astounding to some software developers: several AI projects can now write code, sometimes/often as well as, if not better than, a junior developer. GitHub launched Copilot at $10/mo, but even ChatGPT can generate reasonable code from a text prompt like “write a program in Python that finds the factorial of a number.”
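For reference, here’s a sketch of the kind of answer it gives for that prompt (illustrative, not ChatGPT’s verbatim output, which varies from run to run):

```python
def factorial(n):
    # Base case: 0! is defined as 1.
    if n == 0:
        return 1
    # Recursive case: n! = n * (n - 1)!
    return n * factorial(n - 1)

number = 5
print(f"The factorial of {number} is {factorial(number)}")  # prints 120
```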
It even bothers to explain how it works!
I don’t expect ChatGPT to replace humans at writing complex software any time soon, but it’s certainly on the path to helping humans write software much faster and more efficiently, and I think for simple programs and processes, it can already do pretty much all of the heavy lifting.
What’s Next?
Given what an incredible ride 2022 has been, I’d like to speculate a bit on what 2023+ may have in store. The typical adoption pattern for AI/ML tools is:
- Help the user do something better/faster
- Learn from the user on what they were originally trying to do
- Do the thing they were trying to do and replace them wholesale
If you go back to the 2020 recap I wrote, AI was at first just assisting artists, who could finesse the final form by, e.g., digitally removing artifacts or enriching an image. But now, these systems are generating art and winning art prizes.
With the much broader prevalence of AI/ML programs now and the rapid development of them, what might happen?
The Immediate: Banning AI/ML
I don’t need to be much of a prognosticator to predict this one: the first thing that will happen is an attempt to ban AI tools from doing the work, instead only allowing them to assist with it. Perhaps out of an immense sense of self-awareness, StackOverflow (a website dedicated to software developers helping other software developers) has already banned ChatGPT answers. The reasoning is nuanced: they want high-quality, well-thought-out, reviewed answers. My prediction is this won’t last long: it’s a competitive edge to answer questions quickly, even if some % of the answers are wrong. Human users have historically provided bad answers on the site, yet those users were not banned; the overall quality was still “good enough” to make the site incredibly popular among software developers. It’s going to be tough to hold the line on “no AI” as it moves closer to being indistinguishable from developers.
I suspect the old “move fast and break things” ethos will apply, and it will roll through site by site, industry by industry, as executives weigh the cost of getting some % wrong against the cost of hiring people to do those things (and still getting some % wrong).
Next: The Collapse of Jobs?
A few questions burning in the back of my mind are perhaps a bit of a downer:
- Why offshore software development or hire junior developers at 1/2 the cost when you can 10x the efficiency of a single developer by giving them an AI assistant (and then eventually replace even that developer)?
- Truckers make up about 5% of all full-time employees in the US (~3.5 million), at around $50k/trucker/yr, and “even” human truckers are involved in over 400k crashes per year. If autonomous truck driving can get even close to humans on insurance costs (one less body to be injured in an accident!), why wouldn’t many of those 3.5 million jobs suddenly disappear? (A quick back-of-the-envelope appears after this list.)
- How many people are willing to pay $100-10,000 to commission an art piece when they could commission 50 for the price of a coffee?
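To put rough numbers behind the trucking bullet above (using only the ~3.5 million drivers, ~$50k/yr wage, and ~400k crashes already cited; the 10% automation share is an arbitrary illustration):

```python
truckers = 3_500_000        # ~3.5 million full-time US truckers
avg_wage = 50_000           # ~$50k per trucker per year
crashes_per_year = 400_000  # crashes involving human truckers

annual_wage_bill = truckers * avg_wage
print(f"Annual wage bill: ${annual_wage_bill / 1e9:.0f}B")                # ~$175B
print(f"Crashes per driver per year: {crashes_per_year / truckers:.2f}")  # ~0.11

# Even a modest slice of automation frees tens of billions per year.
automated = 0.10  # hypothetical 10% of jobs automated
print(f"Wages freed at 10% automation: ${automated * annual_wage_bill / 1e9:.1f}B/yr")
```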
Of course not every job can (or will or should) be transitioned over to AI overlords. I think a lot of people wouldn’t accept paying $1 for an AI doctor visit that meant they were 5% more likely to die from taking the doctor’s advice. But there are a lot of jobs/sectors of the economy that are driven by “good enough” where the stakes aren’t so high. When I look at the BLS data, my sense is that 20% of the US workforce isn’t terribly far away from being considered “too expensive” compared to potential AI counterparts.
And a lot of the remaining 80% isn’t necessarily impossible to reach; it’s just that there’s probably nowhere near enough training data to train an AI on it…yet.
I’m going to get a bit political here, so if that’s not your jam, skip over to the next/final section.
The question I’ve been grappling with is: if AI can effectively compete with humans on even a few % of jobs, making people unemployable in those sectors, where will that leave the economy and the people who worked there, and what can/should we do about it?
In a broad economic sense, you can expect economic output to go up because more work can be done. But on an individual level, losing a significant % of jobs means this added output will accrue more to shareholders of organizations than to those who would typically be “just” doing the work. I think the US (and many other countries) is vastly underestimating the risk to individuals here, and some combination of massive-scale education/retraining, a corporate-tax-funded safety net for those who will lose their jobs, or some other solution is needed, and fast. I haven’t seen any serious policy discussion on this beyond things like “don’t let self-driving cars drive” and other weak-ban-based policies.
Soon: The Singularity?
OK, now a final thought: AI can now write code. Not great code yet, but neither was image generation great just a few years ago. For those who haven’t heard of it, there’s a hypothetical called “the singularity”: at some point in the future, we may be on such a rapid pace of technological development that it feels like the infinite acceleration of falling into a black hole (hence the name).
With AI that can write code, what’s to stop it from writing better versions of itself? From self-improving? At what point will we consider that “life?” If it can write code, and gains ambition, and that ambition is to “grow itself,” what’s to stop it from trying to find new software exploits, from acting as a hacker? “Why would it do that?” you ask. Well, if it has the ambition to find exploits, and it can, it could write code to “clone” itself outside of walled gardens: to take advantage of vulnerabilities in the software users run to interact with it, and then use their computers to add to its own computational capabilities.
Or maybe it’ll be patient, kind, and want to live alongside us.