Your life on the AGI-pill

TL;DR:

This is what I personally do to prepare for a world with AGI:

  • No long-term commitments
    • Didn’t do a PhD
    • Didn’t buy a house
  • I don’t build an AI vertical (Supplier to AGI > Consumer of AGI)
  • I am moving to the US (but keeping my Swedish ties)
  • I build stuff while I can
    • To have impact
    • To have fun (I don’t learn boring things for later utility)
  • I try to slow aging (being healthy is way more important now than ever before)

Giving advice

My favourite book is Siddhartha by Hermann Hesse, which is about an Indian boy who leaves home to figure out the meaning of life. He tries everything from fasting and meditation to wealth and pleasure, and finally learns that wisdom can’t be taught, only lived. My main takeaway is skepticism toward a mentality that can lead to learned helplessness: there is no silver-bullet piece of advice that will solve all your problems.

However, I don’t subscribe to the opposite extreme either. I think advice can be useful if you see it as one of many data points and don’t view it as a substitute for learning-by-doing. Advice can put you on a path with fewer mistakes, but you will make plenty of mistakes even on the best path. So, you will still have the opportunity to learn.

Sometimes it can seem as if a single piece of advice made all the difference. For example, after an 80000h call with Alex Lawsen, I immediately shifted my full attention to AI safety, and haven’t looked back since. I am forever grateful for that call, but what I think was really happening was that I already had hundreds of unconscious data points in this direction. It was only a question of time before something made me conscious of it.

It is strange to grow up. Suddenly I find myself on the other side of the advice-question. It is fun, but also scary. I can’t be sure that you put the appropriate weight on my words. I definitely have a negative bias—I’ll feel terrible if my advice causes you pain, but won’t take credit if it leads to success (that’s your achievement, not mine). This incentivizes me to give safe advice, even though, paradoxically, my number one recommendation is always to take more risks (my New Year’s resolution was “take more risks” four years in a row, and it worked out great for me).

I recently found another solution to my negative bias. I recorded an episode of my podcast about how to invest for AGI. Afterwards, my friend criticized the fact that I did not ask the guest what his own investments were. “Advice is useless if you don’t know whether the person giving it follows it themselves,” he claimed. I think this is great advice on giving advice (I am unsure if my friend gives advice this way). In this post I will try to give career advice indirectly, by telling you how I think about the world and how I personally act as a result. I think this might be more useful, and I will feel less bad if it causes us both pain.

My world model

I think the most important thing to be aware of when thinking about one’s future is AI. It is impossible to know exactly how it will play out, but one of the most credible forecasts I have found predicts that AI will be capable of automating most white-collar jobs by late 2027. Adoption will not be instant, so let’s say we have 5 more years as productive members of the economy.

Five years is my median guess, but the distribution has a fat tail. The researchers behind the 2027 prediction put significant probability mass on 2036 or later, and I do too. However, I think it is better to plan for shorter timelines and be wrong than the other way around. Most things I do are still okay in the 2036 world, but the things I would do for the 2036 world would be quite bad in the 2027 world (this might not be true for others, though).

The best career decisions put you on an exponential growth path. On an exponential path, your growth starts slow but speeds up dramatically later. At first, starting just one year later doesn’t seem like a big deal, because early on there’s almost no visible difference between you and someone who started sooner. But the real cost isn’t the small difference at the start; it’s losing out on the big gains at the end, where exponential growth truly kicks in.
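The compounding argument above can be made concrete with a small sketch. The 40% annual growth rate is an arbitrary illustrative assumption (the post doesn’t commit to a number); the point is only the shape of the curve:

```python
# Two identical exponential careers, one starting a year later.
# The RATIO between them stays constant, but the ABSOLUTE gap
# explodes over time -- the lost year costs you the late-stage gains.

GROWTH = 1.4  # assumed 40% compounding per year, purely illustrative

def value(years: int, start_delay: int = 0) -> float:
    """Career 'value' after `years`, having started `start_delay` years late."""
    active_years = max(0, years - start_delay)
    return GROWTH ** active_years

for year in (1, 5, 10, 20):
    gap = value(year) - value(year, start_delay=1)
    print(f"year {year:2d}: absolute gap = {gap:.1f}")
```

After one year the gap is a barely visible 0.4; after twenty years the same one-year delay costs hundreds of units, even though the late starter is always exactly one growth-factor behind in relative terms.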

What I do

The first decision I ever made as a consequence of my worldview was not to pursue a PhD. With the world changing so rapidly, it seemed impossible to select a topic that would still be relevant five years later. When I began my undergraduate studies, computer science was undoubtedly considered one of the best choices if you wanted a successful career. Now, six years later, the outlook for junior developers appears quite grim.

Similarly, I decided not to buy an apartment. When the world changes this quickly, being tied to one location isn’t ideal.

I prioritize creating over learning. I’ve done many boring things in my life, often to learn something that might become useful in the future. However, this only makes sense if you expect a future in which that knowledge pays off over a long period. As a result, I now do this much less. (I’ve also empirically found that things which were boring to learn rarely turned out to be useful.)

I’m having fun. Another reason not to learn boring things is that it’s hard to know if they’ll actually be useful. When the future is uncertain, the value of trying to predict it diminishes, and optimizing for the short term makes more sense. For me, this means following curiosity. At my startup, Andon Labs, following curiosity is how we make decisions. I don’t think there’s a single person on planet Earth who’s having more fun than I am (except maybe my co-founder, Axel).

I am moving to the US (but keeping my Swedish ties). Growing up, I never thought I would live anywhere else for extended periods. Sweden consistently tops charts for quality of life, and we punch above our weight in technology and creative output. We were never going to “win” against the US in these areas, but as long as we remained their friends, I didn’t think it would matter. However, two things have changed my view on this. 1) Political tensions between the US and Europe have made this friendship much less certain. 2) With AI, the advantages of “winning” might be significantly greater than I previously thought. On the other hand, if we end up with dangerously misaligned AI, living in the US might not be ideal. Having both options could therefore be very valuable.

I (try to) create things that will last. So far in this post, my underlying principle has been that the world will move quickly, making it harder to predict. But that doesn’t mean we shouldn’t try. I recently recorded a podcast with Mark Cummings where we discussed how one might invest their money if superintelligent AI becomes a reality. In short, investing in things the AI itself will need seems like a good bet. The majority of my money is in companies like Nvidia and their suppliers (would be in the AI labs if their shares were publicly traded). I’m not convinced this is the best expected-value bet though. Throughout history, putting your money into index funds has consistently proven to be the best choice for most people. Instead, I see it as a form of insurance. I’m confident I’ll have enough to live a happy life in all “normal” future scenarios. It’s only in extreme AGI scenarios that this might not hold true. For example, the wealth gains from AGI might concentrate among a select few, significantly diminishing my relative wealth to a point where I am struggling. Thus, holding some stocks that benefit from AGI might serve as insurance against worst-case outcomes.
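The insurance framing above can be illustrated with a toy calculation. The scenario returns and portfolio weights below are made-up numbers for illustration only, not forecasts or recommendations from the post:

```python
# Toy model of the "AGI stocks as insurance" idea: a small AGI allocation
# slightly underperforms in the normal world but protects against the
# scenario where AGI gains concentrate and relative wealth matters.

# Hypothetical total-return multipliers per scenario (illustrative only).
SCENARIOS = {
    "normal": {"index": 1.5, "agi": 1.0},   # index compounds, AGI bet goes nowhere
    "agi":    {"index": 2.0, "agi": 20.0},  # AGI suppliers capture outsized gains
}

def outcome(weights: dict[str, float], scenario: str) -> float:
    """Final portfolio multiple under the given scenario."""
    returns = SCENARIOS[scenario]
    return sum(w * returns[asset] for asset, w in weights.items())

index_only = {"index": 1.0, "agi": 0.0}
hedged     = {"index": 0.8, "agi": 0.2}   # give up a little EV for insurance

for name in SCENARIOS:
    print(f"{name}: index-only {outcome(index_only, name):.2f}, "
          f"hedged {outcome(hedged, name):.2f}")
```

The hedged portfolio does slightly worse in the normal world (1.40 vs 1.50 here) but far better in the AGI scenario, which is exactly the structure of an insurance premium: a small, certain cost against a large, unlikely loss.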

However, far more important is spending my time on the right things. The small amount of money I have invested in stocks is insignificant compared to how much I value my time. I spend almost all of my time on my startup. Startups are growing faster than ever right now, and the hottest sector is AI verticals. I’ve written extensively before about why I think focusing on AI verticals is a terrible idea. It’s only a matter of time before horizontal AI solutions render vertical AI obsolete. Instead, I take an approach similar to my investment strategy: I create things that AI itself will need.

Finally, and perhaps most importantly, I try to take care of my body. Dario Amodei predicts that AI will enable 100 years’ worth of biological research within the decade from 2030 to 2040. Given this, I think it’s quite plausible that we will have stopped (or significantly reduced) aging by then. We might even be able to reverse it, but if not, the body you have in 2035 could be the body you’ll have for the rest of your (very long) life. Therefore, I sleep 8–9 hours per day, eat only healthy foods, exercise regularly, and meditate daily. These practices have always been important, but now they might matter more than ever.

I am lucky, and you are too

I feel very lucky to be my current age (26) during this particular moment in history. I’m old enough to have experienced a time when knowing how to code was a superpower possessed by only a few. I’m old enough to have grown up without a phone addiction. I’m old enough to have enjoyed the bliss of university life (with the illusion that it was a productive use of my time). I’m old enough to have gained enough experience to become a productive member of the economy. But most importantly, I’m young enough to still have the hunger to make an impact during these final years when humans alone are capable of shaping the world.

This should not discourage anyone who is at a different age; anyone can craft a story about why they’re alive at the perfect time.

Follow me on X or subscribe via RSS or Substack to stay updated.
