Daniel Miessler's Blog, page 7

August 27, 2024

UL NO. 447: Sam Curry on Bug Bounty Careers, Slack Data Exfil, The Work Lie


SECURITY | AI | MEANING :: Unsupervised Learning is a stream of original ideas, story analysis, tooling, and mental models designed to help humans lead successful and meaningful lives in a world full of AI.

TOC

NOTES

MY WORK

SECURITY

AI / TECH

HUMANS

IDEAS

DISCOVERY

RECOMMENDATION OF THE WEEK

APHORISM OF THE WEEK

NOTES

Ok, tons of content this week—super excited for this episode!

Going all-text this time—callback to old-school

Upcoming Speaking: Snyk’s conference in October, Cyberstorm in Switzerland in October, Black Hat in Riyadh in November

The one AI tool you should be trying out from the last couple of weeks is CursorAI. Lots of people are switching to it from Copilot. The big feature seems to be an editor that understands your full codebase.

Ok, let’s go…

MY WORK

My new essay on why layoffs, hiring, the job market, and work in general just suck right now. One of my top 20 essays ever. READ IT

The new way I explain AI—and specifically LLMs—to people. READ IT

SECURITY

CrowdStrike's 2024 Threat Hunting Report reveals that North Korean operatives, posing as job applicants, have infiltrated over 100 U.S.-based companies in sectors like aerospace, defense, retail, and tech. Not much coverage of Blue Friday. MORE

State-linked Chinese entities are using cloud services from Amazon and its rivals to access advanced U.S. chips and AI capabilities they can't get otherwise. MORE

Cisco has patched multiple vulnerabilities, including a high-severity bug (CVE-2024-20375) in its Unified Communications Manager products. This flaw, reported by the NSA, affects SIP call processing and can be exploited remotely to cause a denial-of-service condition. MORE

Sponsor

Is Foreign Software Running in Your Environment?  

Shadow I.T., foreign software, and even unpatched vulnerabilities could be lurking in your corporate mandated devices. To resolve this, ThreatLocker® is offering free I.T. security health reports to organizations looking to harden their environment and mitigate the risks of potential nation-state attacks, all on a single pane of glass.

ThreatLocker’s free report audits what is occurring in your environment, including:

Information about executables, scripts, and libraries.

Files that have been accessed, changed, or deleted.

All network activity, including source and destination IP addresses, port numbers, users, and processes.

Identify and prevent installed software from communicating with entities in Russia, China, or other threat actors.

threatlocker.com/pages/software-audit

Get Your Free Software Report

Two U.S. lawmakers are urging the Commerce Department to investigate cybersecurity risks associated with TP-Link routers, citing vulnerabilities and potential data sharing with the Chinese government. MORE

Quarkslab found a major backdoor in RFID cards made by Shanghai Fudan Microelectronics, one of China's top chip manufacturers. This backdoor allows for the instant cloning of contactless smart cards used globally to open office doors and hotel rooms. MORE

The AI Risk Repository now lists over 700 potential risks that advanced AI systems could pose, making it the most comprehensive source for understanding AI-related issues. MORE

Sponsor

13 Cybersecurity Tools. One Platform. Built for IT Teams

There are thousands of cybersecurity point solutions. Many of them are good—but managing more than a dozen tools, disparate reports, invoices, trainings, etc. is challenging for small IT teams.

We’ve built a platform that does assessments, testing, awareness training, and 24/7/365 managed security all in a single pane of glass. Because every company deserves robust cybersecurity.

 defendify.com

Book A Demo

Researchers found a way to exfiltrate data from Slack's AI by using indirect prompt injection. MORE

The U.S. Navy is rolling out Starlink on its warships to provide high-speed, reliable internet connections, significantly improving operational capabilities and crew morale. MORE

AI / TECH

Anthropic has published the system prompts for its latest AI models, including Claude 3 Opus, Claude 3.5 Sonnet, and Claude 3 Haiku. MORE

AGIBOT—a Chinese company—just unveiled a fleet of five advanced humanoid robots to compete directly with Tesla’s Optimus bot. These models, including the flagship Yuanzheng A2, are designed for tasks ranging from household chores to industrial operations and will start shipping by the end of 2024. I’ll be waiting for an American option. MORE

💡I am anti-Chinese-imports for both robotaxis and humanoid robots. The market is too big, China moves too fast, and we need to give American companies (Elon) time to compete.

I don’t love holding this take. I generally don’t like slowing competitive pressure from the outside, and if it were India or Ireland I’d be fine letting that pressure apply. But not China. They’re too obviously a malicious actor to be allowed to dominate these new markets.

Speaking of that, Tesla is hiring people to train its Optimus humanoid robot by wearing motion capture suits and mimicking actions it will perform. The job, listed as “Data Collection Operator,” pays up to $48 per hour and involves walking for over seven hours a day while carrying up to 30 pounds and wearing a VR headset. MORE

Waymo is looking to launch a subscription service called "Waymo Teen" that would allow teenagers to hail robotaxis solo, with prices ranging from $150 to $250 per month for up to 16 rides. MORE

An AI scientist developed by the University of British Columbia, Oxford, and Sakana AI is creating its own machine learning experiments and running them autonomously. This is where most AI-driven innovation will come from: not just implementing tasks, but doing new research. I talked about it here. MORE

Victor Miller, a mayoral candidate in Wyoming’s capital city, has vowed to let his customized ChatGPT named Vic (Virtual Integrated Citizen) help run the local government if elected. MORE

💡I’m working on how to articulate a political platform for any level of office using Substrate.

You basically define exactly what you want to do, and it branches out with all the Problems, Strategies, KPIs, etc., all in a single platform file that people’s AIs can evaluate and compare to their own beliefs and goals.

I think this is where leadership is heading. Transparent descriptions of vision, strategy, and outcome measurement.

Sean Ammirati, a professor at Carnegie Mellon, noticed a massive up-leveling of progress in his entrepreneurship class this year thanks to generative AI tools like ChatGPT, GitHub Copilot, and FlowiseAI. Students used these tools for marketing, coding, product development, and recruiting early customers, resulting in venture capitalists flocking to the campus. MORE

💡This is what I’ve been talking about with AI Augmentation. If you were competing with a 95/100 person before, because they went to CMU—well, now you’re competing with a 130/100 because they went to CMU AND they use AI for everything.

I read better articles because of AI

Therefore I get better ideas because of AI

Therefore I build better stuff because of AI

Etc.

And I do this all faster than was possible before

Upgrade or lose. Those are your options.

GM is cutting over 1,000 software engineers to streamline its software and services organization. Streamlining by cutting out 1,000 devs? The way I read this is “Start from scratch and only hire A’s from now on.” See: all of my other posts about companies only wanting Killer Cult Members from now on. MORE

Meta is using AI to streamline system reliability investigations with a new root cause analysis system. This system combines heuristic-based retrieval and large language model (LLM)-based ranking, achieving 42% accuracy in identifying root causes at the investigation's start. MORE
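Below is a minimal, hypothetical sketch of that two-stage shape: cheap heuristics narrow thousands of recent changes down to a shortlist, then an LLM ranks the shortlist by how plausibly each change explains the incident. The function names and change fields are illustrative assumptions, not Meta’s actual system.

```python
from typing import Callable

def rank_suspect_changes(
    incident_summary: str,
    recent_changes: list[dict],           # e.g. {"id": ..., "service": ..., "diff_summary": ...}
    affected_service: str,
    llm_score: Callable[[str], float],    # hypothetical: an LLM call returning a 0-1 relevance score
    top_k: int = 20,
) -> list[dict]:
    # Stage 1: heuristic retrieval -- keep only changes to the affected service,
    # capped to a manageable shortlist.
    shortlist = [c for c in recent_changes if c["service"] == affected_service][:500]

    # Stage 2: LLM-based ranking -- score each candidate against the incident description.
    scored = []
    for change in shortlist:
        prompt = (
            f"Incident: {incident_summary}\n"
            f"Candidate change: {change['diff_summary']}\n"
            "How likely is this change to be the root cause (0 to 1)?"
        )
        scored.append((llm_score(prompt), change))

    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [change for _, change in scored[:top_k]]
```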

AI companies are shifting focus from creating god-like AI to building practical products. Gasp! This isn’t a bubble-pop; it’s just natural maturity of a thing that came out 13 minutes ago. People are still figuring this stuff out, and it’s still day 1 in terms of AI capabilities. MORE

Canada is slapping a 100% import tariff on China-made electric vehicles starting October 1, following similar moves by the US and EU. MORE

Former Google CEO Eric Schmidt predicts rapid advancements in AI, with the potential to create significant apps like TikTok competitors in minutes within the next few years. MORE

Anthropic Claude 3.5 can now create iCalendar files from images, and Greg's Ramblings shows how you can use this feature to generate calendar entries just by snapping a photo of a schedule or event flyer. MORE
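If you want to try the same trick yourself, here’s a rough sketch using the Anthropic Python SDK: send a photo plus an instruction to return only a valid .ics file, then save the response. The model name and prompt wording are placeholders to adapt, not anything from Greg’s post.

```python
import base64
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

# Read and base64-encode a photo of a schedule or event flyer.
with open("flyer.jpg", "rb") as f:
    image_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # placeholder; use whatever model you have access to
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image", "source": {"type": "base64", "media_type": "image/jpeg", "data": image_b64}},
            {"type": "text", "text": "Extract the events from this image and return ONLY a valid iCalendar (.ics) file."},
        ],
    }],
)

# Save the model's text output as an importable calendar file.
with open("events.ics", "w") as f:
    f.write(message.content[0].text)
```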

AWS CEO Matt Garman predicts that within the next 24 months, most developers might not be coding anymore due to AI advancements. He emphasizes that the real skill will shift towards innovation and understanding customer needs rather than writing code. MORE

Chinese companies have ramped up their imports of chip production equipment, spending nearly $26 billion in the first seven months of the year. They need to equip 18 new fabs expected to start operations in 2024 and are seriously worried about export controls. MORE

HUMANS

Cisco is laying off 7% of its workforce, which is around 5,900 employees, as it pivots towards AI and cybersecurity. The company is investing $1 billion in tech startups like Cohere, Mistral, and Scale, and has partnered with Nvidia to develop AI infrastructure. MORE

McKinsey's new study reveals that business leaders are missing the mark on why employees are quitting. They say companies are focusing on transactional perks like compensation and flexibility, but employees are actually seeking meaning, belonging, holistic care, and appreciation at work. Couldn’t have been better timed with this week’s Work essay. MORE

Twenty-four brain samples collected in early 2024 measured on average about 0.5% plastic by weight. MORE

Gallup has released its 2023 Global Emotions report, which measures the world's emotional temperature through the Positive Experience Index and Negative Experience Index. The data comes from surveys conducted in 142 countries, using a mix of telephone, face-to-face, and some web surveys, with about 1,000 respondents per country. MORE

💡Exceedingly cool research and data and visualizations! MORE

Nonsmokers who avoided the sun had a life expectancy similar to smokers who got the most sun, according to a study of nearly 30,000 Swedish women over 20 years. The research suggests that avoiding the sun is as risky as smoking. This is the type of thing that needs way more research, but damn. More sun for me, regardless. It’s a massive boost for me in the morning. MORE

Stanford researchers have found that blocking the kynurenine pathway in the brain can reverse the metabolic disruptions caused by Alzheimer’s disease, improving cognitive functions in mice. I’m starting to feel like we’re about to make massive progress on both Alzheimer’s and Cancer, and it’s making me want to invest in 2-3 of the top drug companies. MORE

Using air purifiers in two Helsinki daycare centers reduced kids' sick days by about 30%, according to preliminary findings from the E3 Pandemic Response study. The research, led by Enni Sanmark from HUS Helsinki University Hospital, aims to see if air purification can also cut down on stomach ailments. MORE

University of Missouri scientists have developed a liquid-based solution that removes over 98% of nanoplastics from water. It uses natural, water-repelling solvents to absorb plastic particles, which can then be easily separated and removed. I expect to see a lot of similar products soon. I feel like microplastics might be the new health scare. Not sure if that’s justified or not. Can’t wait for the Huberman episode. MORE

Eli Lilly's weight loss drug tirzepatide, found in Zepbound and Mounjaro, reduced the risk of developing Type 2 diabetes by 94% in obese or overweight adults with prediabetes, according to a long-term study. Dayum. 94%. MORE

Apple Podcasts is losing ground to YouTube and Spotify, with a recent study showing YouTube now leads in podcast consumption at 31%, followed by Spotify at 21%, and Apple Podcasts trailing at 12%. MORE

IDEAS


Damn, just thought of a super cool use case for Fabric +  Telos + Substrate.


1. Maintain a list of everything I've been REALLY wrong about. (Already working on this list)


2. Write a Fabric pattern that looks at that list and identifies key ways that I miss.


3. Recommend.


— ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ ⚙️ (@DanielMiessler)
8:39 PM • Aug 22, 2024


DISCOVERY

ffufai uses ffuf and AI to find more web hacking targets, by Joseph Thacker. MORE

gofuzz.py recursively looks at JavaScript files and finds endpoints that can be tested. MORE

analyze_interviewer_techniques is a new Fabric pattern that will capture the ‘je ne sais quoi’ of a given interviewer. I’ve been using it on Dwarkesh and Tyler Cowen. MORE

harness is a quick tool I put together to test the efficacy of one prompt vs. another. It runs both against an input and then scores the output using a third, objective prompt that rates how well they followed the plot. (There’s a rough sketch of the idea at the end of this section.) MORE

State and time are the same thing — Hillel Wayne explores the concept that state and time are interchangeable. MORE

Don’t force yourself to become a bug bounty hunter, by Sam Curry. MORE

67 years of old Radio Shack catalogs have been scanned and are now available online. MORE

mdrss is a Go-based tool that converts markdown files to RSS feeds. You can write articles in a local folder, and it automatically formats them into an RSS-compliant XML file, handling publication dates and categories. MORE

No "Hello", No "Quick Call", and No Meetings Without an Agenda — This blog post highlights common remote work mistakes like starting conversations with "Hi" and waiting for a response, asking for "quick calls" without context, and scheduling meetings without agendas. 😡💪 MORE

Roger Penrose's book "The Emperor's New Mind" explores the relationship between the human mind and computers, arguing that human consciousness cannot be replicated by machines. MORE

A Collection of Free Public APIs That Are Tested Daily MORE
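For anyone who wants to roll their own version of the harness idea above, here is a minimal sketch of the shape of it: run two candidate prompts against the same input, then have a third judge prompt score each output. The `complete` function is a hypothetical stand-in for whatever LLM call you use; nothing here is the actual harness code.

```python
from typing import Callable

def compare_prompts(
    prompt_a: str,
    prompt_b: str,
    test_input: str,
    judge_prompt: str,
    complete: Callable[[str], str],  # hypothetical: send text to an LLM, get text back
) -> dict:
    # Run both candidate prompts against the same input.
    output_a = complete(f"{prompt_a}\n\nINPUT:\n{test_input}")
    output_b = complete(f"{prompt_b}\n\nINPUT:\n{test_input}")

    # Ask a third, objective prompt to score each output.
    def judge(output: str) -> str:
        return complete(
            f"{judge_prompt}\n\nINPUT:\n{test_input}\n\nOUTPUT:\n{output}\n\nScore from 1 to 10:"
        )

    return {
        "output_a": output_a,
        "output_b": output_b,
        "score_a": judge(output_a),
        "score_b": judge(output_b),
    }
```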

RECOMMENDATION OF THE WEEK

Take the time to read this week’s main essay—We’ve Been Lied To About Work.

But more than just reading it, think about what it means if I’m right. Think about what that means for you and your career, but also all the young people you know and care about.

I didn’t talk about it in that piece, but the solution is the transition to a Human 3.0 mindset, which—in this context—means taking the same skills that you’re good at and that you do for someone else, and doing that for yourself.

More help is coming from me on how exactly to do that, but start thinking about it now.

APHORISM OF THE WEEK
Published on August 27, 2024 17:02

August 26, 2024

World Model + Next Token Prediction = Answer Prediction

Table of Contents

A new way to explain LLM-based AI

The 5 Levels of LLM Understanding

Answers as descriptions of the world

Human vs. absolute omniscience

The argument in deductive form

Guess what? We do it too…

Summary

A new way to explain LLM-based AI

Thanks to Eliezer Yudkowsky, I just found my new favorite way to explain LLMs—and why they’re so strange and extraordinary.

Here’s the post that sent me down this path.



"it just predicts the next token" literally any well-posed problem is isomorphic to 'predict the next token of the answer' and literally anyone with a grasp of undergraduate compsci is supposed to see that without being told.


— Eliezer Yudkowsky ⏹️ (@ESYudkowsky)
2:17 PM • Aug 23, 2024


And here’s the bit that got me…

well-posed problem = prediction of next token of answer

Like—I knew that. And I have been explaining the power of LLMs similarly for over two years now. But it never occurred to me to explain it in this way. I absolutely love it.

Typically, when you’re trying to explain how LLMs can be so powerful, the narrative you’ll get from most is…


There’s no magic in LLMs. Ultimately, it’s nothing but next token prediction.


(victory pose)

A standard AI skeptic argument

The problem with this argument—which Eliezer points out so beautifully—is that, with an adequate understanding of the world, there’s not much daylight between next token prediction and answer prediction.

So, here’s my new way of responding to the “just token prediction” argument, using 5 levels of jargon removal.

The 5 Levels of LLM Understanding

TIER 1: “LLMs just predict the next token in text.”

TIER 2: “LLMs just predict next tokens.”

TIER 3: “LLMs predict the next part of answers.”

TIER 4: “LLMs provide answers to really hard questions.”

TIER 5: “HOLY CRAP IT KNOWS EVERYTHING.”

That resonates with me, but here’s another way to think about it.

Answers as descriptions of the world

If you understand the world well enough to predict the next token of an answer, that means you have answers. 🤔

Or:

The better an LLM understands reality and can describe that reality in text, the more “predicting the next token” becomes knowing the answer to everything.
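Here’s a tiny sketch of what that isomorphism looks like in practice. The `predict_next_token` function is a hypothetical stand-in for any language model; the point is that the only operation inside the loop is next-token prediction, and yet what the loop produces is the answer.

```python
def answer(question: str, predict_next_token, max_tokens: int = 200) -> str:
    """Answer a well-posed question using nothing but next-token prediction."""
    context = question + "\nAnswer:"
    tokens = []
    for _ in range(max_tokens):
        token = predict_next_token(context)  # the model's only operation
        if token == "<eos>":                 # stop once the answer is complete
            break
        tokens.append(token)
        context += token                     # feed each predicted token back in
    return "".join(tokens)
```

If the per-token predictions are good enough, this loop just is answer prediction. They aren’t two capabilities; one is the other, run in a loop.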

But “everything” is a lot, and we’re obviously never going to hit that (see infinity and the limits of math/physics, etc.).

So the question is: What’s a “good enough” model of the universe—for a human context—to be effectively everything?

Human vs. absolute omniscience

If you’re tracking with me, here’s where we’re at—as a deductive argument.1

If you have a perfect model of the universe, and you can predict the next token of an answer about anything in that universe, then you know everything.

But we don’t have a perfect model of the universe.

Therefore—no AI (or any other known system) can know everything.

100% agreed.

But the human standard for knowing everything isn’t actually knowing everything. The bar is much lower than that.

The human standard isn’t:

Give me the location of every molecule in the universe

Predict the exact number of raindrops that will hit my porch when it rains next

Predict the exact price of NVIDIA stock at 3:14 PM EST on October 12, 2029.

Tell me how many alien species in the universe have more than a 100 IQ equivalent.

These are, as far as we understand physics, completely impossible to know because of the limits of the physical, atom-based, math-based world. So we can’t ever know “everything”, or really anything close to it.

But take that off the table. It’s impossible, and it’s not what we’re talking about.

What I’m talking about is human things. Things like:

What makes a good society?

Is this policy likely to increase or decrease suffering in the world?

How did this law affect the outcomes of the people it was supposed to help?

These questions are big. They’re huge. But there’s an “everything” version of answering them (which we’ve already established is impossible), and then there’s the “good enough” version of answering them—at a human level.

I believe LLM-based AI will soon have an adequately deep understanding of the things that matter to humans—such as science, physics, materials, biology, laws, policies, therapy, human psychology, crime rates, survey data, etc.—that we will be able to answer many of our most significant human questions.

Things like:

What causes aging and how do we prevent or treat it?

What causes cancer and how do we prevent or treat it?

What is the ideal structure of government for this particular town, city, country, and what steps should we take to implement it?

For this given organization, how can they best maximize their effectiveness?

For this given family, what steps should they take to maximize the chances of their kids growing up as happy, healthy, and productive members of society?

How does one pursue meaning in their life?

Those are big questions—and they do require a ton of knowledge and a very complex model of the universe—but I think they’re tractable. They’re nowhere near “everything”, and thus don’t require anywhere near a full model of the universe.

In other words, the bar for practical, human-level “omniscience” may be remarkably low, and I believe LLMs are very much on the path to getting there.

The argument in deductive form

Here’s the deductive form of this argument.

If you have a perfect model of the universe, and you can predict the next token of an answer about anything in that universe, then you know everything.

But we don’t have a perfect model of the universe.

Therefore—no AI (or any other known system) can know everything.

However, the human standard for “everything” or “practical omniscience” is nowhere near this impossible standard.

Many of the most important questions to humans that have traditionally been associated with something being “godlike”, e.g., how to run a society, how to pursue meaning in the universe, etc., can be answered sufficiently well using AI world models that we can actually build.

Therefore, humans may soon be able to build “practically omniscient” AI for most of the types of problems we care about as a species.2

🤯

Guess what? We do it too…

Finally there’s another point that’s worth mentioning here, which is that every scientific indication we have points to humans being word predictors too.

Try this experiment right now, in your head: Think of your favorite 10 restaurants.

As you start forming that list in your brain—watch the stuff that starts coming back. Think about the fact that you’re receiving a flash of words, thoughts, images, memories, etc., from the black box of your memory and experience.

Notice that if you did that same exercise two hours from now—or two days from now—the flashes and thoughts you’d have would be different, and you might even come up with a different list, or put the list in a different order.

Meanwhile, the way this works is even less understood than LLMs! At some level, we, too, are “just” next-token predictors, and it doesn’t make us any less interesting or wonderful.

Summary

It all starts with the sentence “next-token prediction is isomorphic to answer prediction”.

This means “next token prediction” is actually an extraordinary capability—more like splitting the atom than a parlor trick.

Humans seem to be doing something very similar, and you can watch it happening in real-time if you pay attention to your own thoughts or speech.

But the quality of a token predictor comes down to the complexity/quality of the model of the universe it’s based on.

And because we can never have a perfect AI model of the universe, we can never have truly omniscient AI.

Fortunately, to be considered “godlike” to humans—we don’t need a perfect model of the universe. We only need enough model complexity to be able to answer the questions that matter most to us.

We might be getting close.

1  A deductive argument is one where you must accept the conclusion if you accept the premises, e.g., 1) All rocks lack a heartbeat. 2) This is a rock. 3) Therefore, this rock lacks a heartbeat.

2  Thanks to Jai Patel, Joseph Thacker, and Gabriel Bernadette-Shapiro for talking through, shaping, and contributing to this piece. They’re my go-to friends for discussing lots of AI topics—but especially deeper stuff like this.


Published on August 26, 2024 15:46

The End of Work

Table of Contents

The feeling

The symptoms

What I think is actually happening

3 reasons for hope

1. Those jobs sucked anyway

2. Even fast things go slow

3. What comes after will be much better

Summary

The feeling

If you’re like me, you’ve had this strange, uneasy feeling about the job market1 for a few years now.

It’s like a splinter in the brain. We know something is deeply broken about the whole system, but it’s impossible to grasp or articulate.

I’m writing this because I think I figured it out.

People talk a lot about how AI is going to replace millions of jobs, and how it will also create many more. I think that’s true, but I think there’s a lot more going on here than just AI.

The symptoms

First, why do we even think there’s a problem? For me, it starts with noticing how bad—and how often—work just completely sucks. Like the whole thing—finding work, doing work, being stressed about not losing work. Etc.2  

Most companies, departments, and teams are horribly inefficient, have very little direction, are full of wasted time and effort, and are poorly run. It’s constant meetings to talk about the latest corporate fuckery, which only prevents you from doing what you should be doing. It’s change for the sake of change. And when you get excited about a way to fix things, either nobody listens, or they only pretend to before failing to implement it.

Work tends to be a series of disappointments for most people, punctuated by a few rays of light. I was curious about how many people felt this way—not wanting to do a full essay on just my own opinions—and found this from Gallup about the quiet quitting phenomenon.

Gallup: Is Quiet Quitting Real?, 2023


The overall decline was especially related to clarity of expectations, opportunities to learn and grow, feeling cared about, and a connection to the organization's mission or purpose—signaling a growing disconnect between employees and their employers.


Many quiet quitters fit Gallup's definition of being "not engaged" at work—people who do the minimum required and are psychologically detached from their job. This describes half of the U.S. workforce.

Is Quiet Quitting Real (Gallup)

I’m sure there are lots of factors going into Quiet Quitting, but I think this feeling I’m talking about is one of them.

What I think is actually happening

And that brings me to what I think the real issues are, which are a lot deeper and more unsettling than just, “AI is taking the jobs.”

The ideal number of employees in any company is zero. If a company could run and make money using no people, then that is exactly what it should do. We never think about this or talk about this because it’s very strange and uncomfortable, but it is true. The purpose of companies is not to employ people; it is to provide a product or service in return for money.

Because of that, there is a constant downward pressure on anybody who is employed. It is not a specific pressure from a specific person or department. It is simply a fact of business reality that manifests itself in various ways throughout an organization over time. We have to stop thinking of this as a malicious thing, as if we are owed employment and they are trying to take it away from us. The truth is exactly the opposite.

Nobody owes anybody a job. The only reason anyone has one is because there was a problem at some point in that business that required a human to do some part of the work. Building on that, if that ever stops being the case for a particular person or team or department of human employees, the natural next action is to get rid of them. Again, not because business owners or managers are bad people or anything like that. We need to stop injecting morality into this. Businesses should simply have as few employees, and really as few expenses of any kind, as possible.

A good way to think about this is to look at a list of software products your company pays for. Let’s say your company pays for 215 software products that cost it $420,000 a year to own and use. Nobody would object to somebody looking at that list of software, finding redundancies, and canceling those licenses. That is simply work that is being done by other software, or is not required anymore, so it would be stupid for the business not to cancel those licenses or to let them lapse at renewal.

It is exactly the same with humans, and no matter what you read or hear, I believe this is the main reason we are seeing disruption in job markets today. I think more and more businesses are seeing themselves as money in and money out, and are seeing human workers as being very expensive and generally not very good at what they do. This is not necessarily because of individual workers but because human organizations and communication are so inefficient and wasteful. So basically, companies are realizing that they are spending millions—or hundreds of millions—of dollars per year on human talent, and they’re realizing it’s not worth it.

So that is 2 pieces: 1) ZERO is the optimum number of employees for any company, and 2) companies are realizing that they’re paying way too much for giant workforces that are not producing near the value being paid. Forgetting any modern technological innovation, these two things combined produce extreme downward pressure on the workforce. It adds pressure, stress, drama, and all sorts of negativity to the practice of finding a job, getting a job, keeping a job, working with coworkers, going through organizational changes, and everything else that goes with being a regular employee. It just basically fucking sucks. And the reason it sucks is because companies ultimately wish that you didn’t exist in the first place. We have forgotten that—or never learned it—and that needs to change.

Now let’s add AI, which if you’ve read any of the stuff I’ve been writing, you know is—in the context of business—a technology for replacing the human intellectual work tasks that make up someone’s job.

Here’s a good example from a recent piece on this topic.

From ‘You’ve Been Thinking About AI All Wrong’

What this example shows is a workflow that a human worker does today—just like millions of similar workflows—but that AI will soon be able to do instead.

It’s just steps, like we can see further down in the piece.

From ‘You’ve Been Thinking About AI All Wrong’

So now we have 3 pieces. The ideal number of employees is zero, companies are extremely unhappy with their current workforces, and just now—starting in 2023 and 2024—it is actually becoming possible to replace human intelligence tasks with technology.

You have to see where this is going. It is not moving towards a few jobs getting removed and a few jobs getting added. It’s not moving towards some gentle shifts in workforce dynamics or euphemisms like that. No, we are talking about fundamental change. Now for the main point of this piece.

The entire concept of work that we have had for thousands of years was a temporary model that was required to solve a temporary problem. Namely, that people trying to build or sell something needed work done that they were unable to do entirely by themselves.

Read that again.

The only reason anybody has a job is because some people are builders and creators, and they cannot do the entire job themselves.

That work—which is required to produce those products and services—is the reason people have 9-5 jobs. This is the reason the entire economy works the way it does. Those builders/creators then hire people, who they have to pay, and those people spend that money on things in the economy. And that is the system we are all used to.

Well…

This system goes away when builders and creators can make things by themselves. Which is precisely what AI is about to enable.

So, here’s where we are.

The ideal number of employees for a company is zero.

The reason companies had employees in the past is ONLY because the founders couldn’t deliver their product/service without human workers.

Companies and society have sort of forgotten this over the past decades, and it’s been kind of assumed that all companies should have these large workforces, because it’s the job of companies to provide good jobs to society.

This hasn’t been working for companies, and company leaders are now noticing that they’re not getting near the value they should be from most employees and teams.

So there’s already this realization sinking in, and then we are getting AI at the exact same time.

This means at the exact time that company leaders are looking very skeptically at their human resources spending, they’re being presented with an alternative.

OK, so maybe you’re thinking:


Holy crap—he’s right.


This is a horrible problem, and we are all screwed. What do I even do?


Yes, and no. I have three things to offer here that should make you feel somewhat better.

3 reasons for hope

But it’s not all bad. I have 3 reasons for optimism.

1. Those jobs sucked anyway

How many people do you know who work regular 9 to 5 jobs in a knowledge work environment who look forward to Monday? How many people, if you really stood back and looked at your life, think it’s good to spend most of your waking moments getting ready to work, dealing with dumbass work shit, all fucking day, and then trying to destress from that day, just so you can actually enjoy the few hours you have left to live your actual life?

All that so you can hopefully make it to Friday so you can have two days where you hopefully don’t have to think about that hellscape you call work.

Is that the way humans should live on their home planet? If advanced and benign aliens came and visited, and interviewed us, would they not see that as a primitive state of being? Of course they would.

Bullshit Jobs, by David Graeber

The thing that we are about to lose is not something we should cry over. We should be worried because losing these jobs will be massively disruptive, and it’s stressful as hell to think about a completely changed future. But these Bullshit Jobs themselves are not something to cherish and remember.

2. Even fast things go slow

This transition will simultaneously happen very fast, but also pretty slowly. Even if there is advanced AGI in 2025 (which would be very fast), it still takes time for new technology to enter into companies and fully replace previous technology or humans.

So it will take a while, and that’s not even taking into account likely legislation that will slow it down even further based on how disruptive it is. So it’s not like half of the workforce will suddenly not have a job in 2026. It will be very fast, but not that fast.

3. What comes after will be much better

And finally—and best of all—what we will be left with afterwards, assuming we survive, will be a much better way to live.

That same AI that took our dumbass jobs away also has the potential to produce extraordinary abundance for humanity, freeing us up to use our days being human rather than bad biological precursors to AI workers.

We weren’t supposed to be moving paperwork, and sorting spreadsheets, and sending meeting invites, and writing computer code. It’s not what we were supposed to be doing.

What we are supposed to be doing is building and creating things for each other. Things that make each other’s lives better, and richer, and more meaningful.

And that is exactly what we will do on the other side of all of this. I obviously don’t know our chances of making it to this other side, or if/when it happens, exactly how that will play out. That is impossible to know, but what I can tell you is that I am all in on that future, because it doesn’t make sense to me to live any other way.

Sure, the disruption might tear us apart and send us back to the Bronze Age. That’s possible too, but I choose to believe that we will make it out of this. We’ll get out by getting through. And we’ll emerge on the other side better for it.

Summary

The primary reason we’re seeing all this disruption in the job market is because we’ve been part of a mass delusion about the very nature of work.

We told ourselves that millions of corporate workforce jobs—that pay good salaries, have good benefits, and allow you to save for retirement—were somehow a natural feature of the universe.

In fact, that entire paradigm was just a temporary feature of our civilization, caused by builders and creators not being able to do the work required by themselves. And that’s going away.

But it’s ok.

Most of the jobs sucked anyway, and they took up most of the daily waking hours we were supposed to be spending with family and friends.

Plus even if this transition happens really fast, it still won’t be overnight. Big things take a while.

And most importantly—what waits for us on the other side is a better way to live. A more human way to live—where we identify as individuals rather than corporate workers and exchange value and meaning as part of a new human-centered economy.

My purpose in writing this is to give an alternative—and hopefully far more satisfying—explanation of the feelings you might’ve been feeling for a very long time. And to give you both some warning—and some hope—with which to move forward.

I’ve oriented my life—since the end of 2022—around thinking about this problem, providing ideas and frameworks around it, and have written hundreds of articles about the problem and how to prepare for it. But rather than give the standard “subscribe to my newsletter” response, I would just say that I’m easy to find.

Connect here and we can continue the conversation. Website | X | Newsletter | Community | LinkedIn

We are going to get through this, and it will be much better once we do.

🫶

1  I’m talking about the knowledge work job market, like IT, etc., not physical or professional work, although I do think they’ll be affected soon as well.

2  I’m specifically speaking of the last few years, say, since 2020.


Published on August 26, 2024 12:08

We've Been Lied To About Work

Table of Contents

The feeling

The symptoms

What I think is actually happening

Three reasons for hope

1. Those jobs sucked anyway

2. Even fast things go slow

3. What comes after will be much better

Summary

The feeling

If you’re like me, you’ve had this strange, uneasy feeling about the job market1 for a few years now.

The feeling is like a splinter in the brain—like something is deeply broken about the whole system, but I couldn’t grasp it or articulate it.

I’m writing this because I think I figured it out.

People talk a lot about how AI is going to replace millions of jobs, and how it will also create many more. I think that’s true, but I think there’s a lot more going on here than just AI. I think AI is an accelerant to all of this, but not the main issue.

I think the main factor is so elusive and depressing that it’s hard to even talk about, which is why we don’t.

The symptoms

First, the symptoms. It starts with noticing how bad—and how often—work just completely sucks. Like the whole thing. Finding work. Doing work. Being stressed about not losing work. Etc.2  

Most companies, departments, and teams are horribly inefficient, have very little direction, are full of wasted time and effort, and are poorly run. It’s constant meetings to talk about the latest corporate fuckery, which only prevents you from doing what you should be doing. It’s change for the sake of change. And when you get excited about a way to fix things, either nobody listens, or they only pretend to before failing to implement it.

So work tends to be a series of disappointments for most people, punctuated by a few rays of light.

I was curious about how many people felt this way—not wanting to do a full essay on just my own opinions—and found this from Gallup about the quiet quitting phenomenon.

Gallup: Is Quiet Quitting Real?, 2023


The overall decline was especially related to clarity of expectations, opportunities to learn and grow, feeling cared about, and a connection to the organization's mission or purpose—signaling a growing disconnect between employees and their employers.


Many quiet quitters fit Gallup's definition of being "not engaged" at work—people who do the minimum required and are psychologically detached from their job. This describes half of the U.S. workforce.

Is Quiet Quitting Real (Gallup)

I’m sure there are lots of factors going into Quiet Quitting, but I think this feeling I’m talking about is one of them.

What I think is actually happening

And that brings me to what I think the real issues are. They’re not pleasant, and might be rather jarring, but it’s time to have the conversation.

Here’s what I think the main problems are that are causing all this, and that are much bigger than AI.

The ideal number of employees in any company is zero. If a company could run and make money using no people, then that is exactly what it should do. We never think about this or talk about this because it’s very strange and uncomfortable, but it is true. The purpose of companies is not to employ people; it is to provide a product or service in return for money.

Because of that, there is a constant downward pressure on anybody who is employed. It is not a specific pressure from a specific person or department. It is simply a fact of business reality that manifests itself in various ways throughout an organization over time. We have to stop thinking of this as a malicious thing, as if we are owed employment and they are trying to take it away from us. The truth is exactly the opposite.

Nobody owes anybody a job. The only reason anyone has one is because there was a problem at some point in that business that required a human to do some part of the work. Building on that, if that ever stops being the case for a particular person or team or department of human employees, the natural next action is to get rid of them. Again, not because business owners or managers are bad people or anything like that. We need to stop injecting morality into this. Businesses should simply have as few employees, and really as few expenses of any kind, as possible.

A good way to think about this is to look at a list of software products your company pays for. Let’s say your company pays for 215 software products that cost it $420,000 a year to own and use. Nobody would object to somebody looking at that list of software, finding redundancies, and canceling those licenses. That is simply work that is being done by other software, or is not required anymore, so it would be stupid for the business not to cancel those licenses or to let them lapse at renewal.

It is exactly the same with humans, and no matter what you read or hear, I believe this is the main reason we are seeing disruption in job markets today. I think more and more businesses are seeing themselves as money in and money out, and are seeing human workers as being very expensive and generally not very good at what they do. This is not necessarily because of individual workers but because human organizations and communication are so inefficient and wasteful. So basically, companies are realizing that they are spending millions—or hundreds of millions—of dollars per year on human talent, and they’re realizing it’s not worth it.

So that is two pieces: 1) ZERO is the optimum number of employees for any company, and 2) companies are realizing that they’re paying way too much for giant workforces that are not producing near the value being paid. Forgetting any modern technological innovation, these two things combined produce extreme downward pressure on the workforce. It adds pressure, stress, drama, and all sorts of negativity to the practice of finding a job, getting a job, keeping a job, working with coworkers, going through organizational changes, and everything else that goes with being a regular employee. It just basically fucking sucks. And the reason it sucks is because companies ultimately wish that you didn’t exist in the first place. We have forgotten that—or never learned it—and that needs to change.

Now let’s add AI, which if you’ve read any of the stuff I’ve been writing, you know is—in the context of business—a technology for replacing the human intellectual work tasks that make up someone’s job.

Here’s a good example from a recent piece on this topic.

From ‘You’ve Been Thinking About AI All Wrong’

What this example shows is a workflow that a human worker does today—just like millions of similar workflows—but that AI will soon be able to do instead.

It’s just steps, as broken down further in the piece.

From ‘You’ve Been Thinking About AI All Wrong’

So now we have three pieces. The ideal number of employees is zero, companies are extremely unhappy with their current workforces, and just now—starting in 2023 and 2024—it is actually becoming possible to replace human intelligence tasks with technology.

You have to see where this is going. It is not moving towards a few jobs getting removed and a few jobs getting added. It’s not moving towards some gentle shifts in workforce dynamics or euphemisms like that. No, we are talking about fundamental change. Now for the main point of this piece.

The entire concept of work that we have had for thousands of years was a temporary model that was required to solve a temporary problem. Namely, that people trying to build or sell something needed work done that they were unable to do entirely by themselves.

Read that again.

The only reason anybody has a job is because some people are builders and creators, and they cannot do the entire job themselves.

That work—which is required to produce those products and services—is the reason people have 9 to 5 jobs. This is the reason the entire economy works the way it does. Those builders/creators then hire people, who they have to pay, and those people spend that money on things in the economy. And that is the system we are all used to.

Well…

This system goes away when builders and creators are able to make things by themselves. Which is precisely what AI is about to enable.

So, here’s where we are.

The ideal number of employees for a company is zero.

The reason companies had employees in the past is ONLY because the founders couldn’t deliver their product/service without human workers.

Companies and society have sort of forgotten this over the past decades, and it’s been kind of assumed that all companies should have these large workforces, because it’s the job of companies to provide good jobs to society.

This hasn’t been working for companies, and company leaders are now noticing that they’re not getting near the value they should be from most employees and teams.

So there’s already this realization sinking in, and then we are getting AI at the exact same time.

This means at the exact time that company leaders are looking very skeptically at their human resources spending, they’re being presented with an alternative.

OK, so maybe you’re thinking:


Holy crap—he’s right.


This is a horrible problem, and we are all screwed. What do I even do?


Yes, and no. I have three things to offer here that should make you feel somewhat better.

Three reasons for hope

But it’s not all bad. I have 3 reasons this is better than it sounds.

1. Those jobs sucked anyway

How many people do you know who work regular 9 to 5 jobs in a knowledge work environment who look forward to Monday? How many people, if you really stood back and looked at your life, think it’s good to spend most of your waking moments getting ready to work, dealing with dumbass work shit, all fucking day, and then trying to destress from that day, just so you can actually enjoy the few hours you have left to live your actual life?

All that so you can hopefully make it to Friday so you can have two days where you hopefully don’t have to think about that hellscape you call work.

Is that the way humans should live on their home planet? If advanced and benign aliens came and visited, and interviewed us, would they not see that as a primitive state of being? Of course they would.

Bullshit Jobs, by David Graeber

The thing that we are about to lose is not something we should cry over. We should be worried because losing these jobs will be massively disruptive, and it’s stressful as hell to think about a completely changed future. But these Bullshit Jobs themselves are not something to cherish and remember.

2. Even fast things go slow

This transition will simultaneously happen very fast, but also pretty slowly. Even if there is advanced AGI in 2025 (which would be very fast), it still takes time for new technology to enter into companies and fully replace previous technology or humans.

So it will take a while, and that’s not even taking into account likely legislation that will slow it down even further based on how disruptive it is. So it’s not like half of the workforce will suddenly not have a job in 2026. It will be very fast, but not that fast.

3. What comes after will be much better

And finally—and best of all—what we will be left with afterwards, assuming we survive, will be a much better way to live.

That same AI that took our dumbass jobs away also has the potential to produce extraordinary abundance for humanity, freeing us up to use our days being human rather than bad biological precursors to AI workers.

We weren’t supposed to be moving paperwork, and sorting spreadsheets, and sending meeting invites, and writing computer code. It’s not what we were supposed to be doing.

What we are supposed to be doing is building and creating things for each other. Things that make each other’s lives better, and richer, and more meaningful.

And that is exactly what we will do on the other side of all of this. I obviously don’t know our chances of making it to this other side, or if/when it happens, exactly how that will play out. That is impossible to know, but what I can tell you is that I am all in on that future, because it doesn’t make sense to me to live any other way.

Sure, the disruption might tear us apart and send us back to the Bronze Age. That’s possible too, but I choose to believe that we will make it out of this. We’ll get out by getting through. And we’ll emerge on the other side better for it.

Summary

The primary reason we’re seeing all this disruption in the job market is because we’ve been part of a mass delusion about the very nature of work.

We told ourselves that millions of corporate workforce jobs—that pay good salaries, have good benefits, and allow you to save for retirement—were somehow a natural feature of the universe.

In fact, that entire paradigm was just a temporary feature of our civilization, caused by builders and creators not being able to do the work required by themselves. And that’s going away.

But it’s ok.

Most of the jobs sucked anyway, and they took up most of the daily waking hours we were supposed to be spending with family and friends.

Plus even if this transition happens really fast, it still won’t be overnight. Big things take a while.

And most importantly—what waits for us on the other side is a better way to live. A more human way to live—where we identify as individuals rather than corporate workers and exchange value and meaning as part of a new human-centered economy.

My purpose in writing this is to give an alternative—and hopefully far more sense-making—explanation of the feelings you might’ve been feeling for a very long time. And to give you both some warning—and some hope—with which to move forward.

I don’t like to give such news without providing some sort of practical advice for what people can do.

I’ve oriented my life since the end of 2022 to thinking about this problem, providing ideas and frameworks around it, and have written hundreds of articles about it. But rather than give the standard “subscribe to my newsletter” response, I would just say that I’m easy to find. Website | X | Newsletter | Community | LinkedIn

We are going to get through this, and it will be much better once we do.

🫶

1  I’m talking about the knowledge work job market, like IT, etc., not physical or professional work, although I do think they’ll be affected soon as well.

2  I’m specifically speaking of the last few years, say, since 2020.


Published on August 26, 2024 12:08

The Real Problem With the Job Market

Table of Contents

The feeling

The symptoms

What I think is actually happening

Three reasons for hope

1. Those jobs sucked anyway

2. Even fast things go slow

3. What comes after will be much better

Summary

The feeling

If you’re like me, you’ve had this strange, uneasy feeling about the job market1 for a few years now.

The feeling is like a splinter in the brain—like something is deeply broken about the whole system, but I couldn’t grasp it or articulate it.

I’m writing this because I think I figured it out.

People talk a lot about how AI is going to replace millions of jobs, and how it will also create many more. I think that’s true, but I think there’s a lot more going on here than just AI. I think AI is an accelerant to all of this, but not the main issue.

I think the main factor is so elusive and depressing that it’s hard to even talk about, which is why we don’t.

The symptoms

First, the symptoms. It starts with noticing how bad—and how often—work just completely sucks. Like the whole thing. Finding work. Doing work. Being stressed about not losing work. Etc.2  

Most companies, departments, and teams are horribly inefficient, have very little direction, are full of wasted time and effort, and are poorly run. It’s constant meetings to talk about the latest corporate fuckery, which only prevents you from doing what you should be doing. It’s change for the sake of change. And when you get excited about a way to fix things, either nobody listens, or they only pretend to before failing to implement it.

So work tends to be a series of disappointments for most people, punctuated by a few rays of light.

I was curious about how many people felt this way—not wanting to do a full essay on just my own opinions—and found this from Gallup about the quiet quitting phenomenon.

Gallup: Is Quiet Quitting Real?, 2023


The overall decline was especially related to clarity of expectations, opportunities to learn and grow, feeling cared about, and a connection to the organization's mission or purpose—signaling a growing disconnect between employees and their employers.


Many quiet quitters fit Gallup's definition of being "not engaged" at work—people who do the minimum required and are psychologically detached from their job. This describes half of the U.S. workforce.

Is Quiet Quitting Real (Gallup)

I’m sure there are lots of factors going into Quiet Quitting, but I think this feeling I’m talking about is one of them.

What I think is actually happening

And that brings me to what I think the real issues are. They’re not pleasant, and might be rather jarring, but it’s time to have the conversation.

Here’s what I think the main problems are that are causing all this, and that are much bigger than AI.

The ideal number of employees in any company is zero. If a company could run and make money using no people, then that is exactly what it should do. We never think about this or talk about this because it’s very strange and uncomfortable, but it is true. The purpose of companies is not to employ people; it is to provide a product or service in return for money.

Because of that, there is a constant downward pressure on anybody who is employed. It is not a specific pressure from a specific person or department. It is simply a fact of business reality that manifests itself in various ways throughout an organization over time. We have to stop thinking of this as something malicious, as if we are owed employment and they are scheming to get rid of us. The truth is exactly the opposite.

Nobody owes anybody a job. The only reason anyone has one is because there was a problem at some point in that business that required a human to do some part of the work. Building on that, if that ever stops being the case for a particular person, team, or department of human employees, the natural next action is to get rid of them. Again, not because business owners or managers are bad people or anything like that. We need to stop injecting morality into this. Businesses should simply have as few employees, and really as few expenses of any kind, as possible.

A good way to think about this is to look at the list of software products your company pays for. Let’s say your company pays for 215 software products that cost it $420,000 a year to own and use. Nobody would object to somebody going through that list, finding redundancies, and canceling those licenses. That work is simply being done by other software, or is not required anymore, so it would be stupid for the business not to cancel those licenses or let them lapse at renewal.

It is exactly the same with humans, and no matter what you read or hear, I believe this is the main reason we are seeing disruption in job markets today. I think more and more businesses are seeing themselves as money in and money out, and are seeing human workers as being very expensive and generally not very good at what they do. This is not necessarily because of individual workers but because human organizations and communication are so inefficient and wasteful. So basically, companies are realizing that they are spending millions—or hundreds of millions—of dollars per year on human talent, and they’re realizing it’s not worth it.

So that is two pieces: 1) ZERO is the optimum number of employees for any company, and 2) companies are realizing that they’re paying way too much for giant workforces that are not producing near the value being paid. Forgetting any modern technological innovation, these two things combined produce extreme downward pressure on the workforce. It adds pressure, stress, drama, and all sorts of negativity to the practice of finding a job, getting a job, keeping a job, working with coworkers, going through organizational changes, and everything else that goes with being a regular employee. It just basically fucking sucks. And the reason it sucks is because companies ultimately wish that you didn’t exist in the first place. We have forgotten that—or never learned it—and that needs to change.

Now let’s add AI, which, if you’ve read any of the stuff I’ve been writing, you know is—in the context of business—a technology for replacing the human intellectual work tasks that make up someone’s job.

Here’s a good example from a recent piece on this topic.

From ‘You’ve Been Thinking About AI All Wrong’

What this example shows is a workflow that a human worker does today—just like millions of similar workflows—but that AI will soon be able to do instead.

It’s just steps, as broken down further in the piece.

From ‘You’ve Been Thinking About AI All Wrong’

So now we have three pieces. The ideal number of employees is zero, companies are extremely unhappy with their current workforces, and just now—starting in 2023 and 2024—it is actually becoming possible to replace human intelligence tasks with technology.

You have to see where this is going. It is not moving towards a world where a few jobs get removed and a few jobs get added. It’s not moving towards some gentle shift in workforce dynamics or euphemisms like that. No, we are talking about fundamental change. Now for the main point of this piece.

The entire concept of work that we have had for thousands of years was a temporary model that was required to solve a temporary problem. Namely, that people who were trying to build or sell something needed work they were unable to do by themselves.

Read that again.

The only reason anybody has a job is because some people are builders and creators, and they cannot do the entire job themselves.

That work—which is required to produce those products and services—is the reason people have 9 to 5 jobs. This is the reason the entire economy works the way it does. Those builders/creators then hire people, who they have to pay, and those people spend that money on things in the economy. And that is the system we are all used to.

Well…

This system goes away when builders and creators are able to make things by themselves. Which is precisely what AI is about to enable.

So, here’s where we are.

The ideal number of employees for a company is zero.

The reason companies had employees in the past is ONLY because the founders couldn’t deliver their product/service without human workers.

Companies and society have sort of forgotten this over the past decades, and it’s been kind of assumed that all companies should have these large workforces, because it’s the job of companies to provide good jobs to society.

This hasn’t been working for companies, and company leaders are now noticing that they’re not getting near the value they should be from most employees and teams.

So there’s already this realization sinking in, and then we are getting AI at the exact same time.

This means at the exact time that company leaders are looking very skeptically at their human resources spending, they’re being presented with an alternative.

OK, so maybe you’re thinking:


Holy crap—he’s right.


This is a horrible problem, and we are all screwed. What do I even do?


Yes, and no. I have three things to offer here that should make you feel somewhat better.

Three reasons for hope

But it’s not all bad. I have 3 reasons this is better than it sounds.

1. Those jobs sucked anyway

How many people do you know who work regular 9 to 5 jobs in a knowledge work environment who look forward to Monday? How many people, if you really stood back and looked at your life, think it’s good to spend most of your waking moments getting ready to work, dealing with dumbass work shit, all fucking day, and then trying to destress from that day, just so you can actually enjoy the few hours you have left to live your actual life?

All that so you can hopefully make it to Friday so you can have two days where you hopefully don’t have to think about that hellscape you call work.

Is that the way humans should live on their home planet? If advanced and benign aliens came and visited, and interviewed us, would they not see that as a primitive state of being? Of course they would.

Bullshit Jobs, by David Graeber

The thing that we are about to lose is not something we should cry over. We should be worried because losing these jobs will be massively disruptive, and it’s stressful as hell to think about a completely changed future. But these Bullshit Jobs themselves are not something to cherish and remember.

2. Even fast things go slow

This transition will simultaneously happen very fast, but also pretty slowly. Even if there is advanced AGI in 2025 (which would be very fast), it still takes time for new technology to enter into companies and fully replace previous technology or humans.

So it will take a while, and that’s not even taking into account likely legislation that will slow it down even further based on how disruptive it is. So it’s not like half of the workforce will suddenly not have a job in 2026. It will be very fast, but not that fast.

3. What comes after will be much better

And finally—and best of all—what we will be left with afterwards, assuming we survive, will be a much better way to live.

That same AI that took our dumbass jobs away also has the potential to produce extraordinary abundance for humanity, freeing us up to use our days being human rather than bad biological precursors to AI workers.

We weren’t supposed to be moving paperwork, and sorting spreadsheets, and sending meeting invites, and writing computer code. It’s not what we were supposed to be doing.

What we are supposed to be doing is building and creating things for each other. Things that make each other's lives better, and richer, and more meaningful.

And that is exactly what we will do on the other side of all of this. I obviously don’t know our chances of making it to this other side, or if/when it happens, exactly how that will play out. That is impossible to know, but what I can tell you is that I am all in on that future, because it doesn’t make sense to me to live any other way.

Sure, the disruption might tear us apart and send us back to the Bronze Age. That’s possible too, but I choose to believe that we will make it out of this. We’ll get out by getting through. And we’ll emerge on the other side better for it.

Summary

The primary reason we’re seeing all this disruption in the job market is because we’ve been part of a mass delusion about the very nature of work.

We told ourselves that millions of corporate workforce jobs—that pay good salaries, have good benefits, and allow you to save for retirement—were somehow a natural feature of the universe.

In fact, that entire paradigm was just a temporary feature of our civilization, caused by builders and creators not being able to do the work required by themselves. And that’s going away.

But it’s ok.

Most of the jobs sucked anyway, and they took up most of the daily waking hours we were supposed to be spending with family and friends.

Plus even if this transition happens really fast, it still won’t be overnight. Big things take a while.

And most importantly—what waits for us on the other side is a better way to live. A more human way to live—where we identify as individuals rather than corporate workers and exchange value and meaning as part of a new human-centered economy.

My purpose in writing this is to give an alternative—and hopefully far more sense-making—explanation of the feelings you might’ve been feeling for a very long time. And to give you both some warning—and some hope—with which to move forward.

I don’t like to give such news without providing some sort of practical advice for what people can do.

I’ve oriented my life since the end of 2022 to thinking about this problem, providing ideas and frameworks around it, and have written hundreds of articles about it. But rather than give the standard “subscribe to my newsletter” response, I would just say that I’m easy to find. Website | X | Newsletter | Community | LinkedIn

We are going to get through this, and it will be much better once we do.

🫶

1  I’m talking about the knowledge work job market, like IT, etc., not physical or professional work, although I do think they’ll be affected soon as well.

2  I’m specifically speaking of the last few years, say, since 2020.


Published on August 26, 2024 12:08

August 20, 2024

Aliens Landed in Palo Alto in October of 2027


On the 8th of October in 2027, an alien craft was seen entering the atmosphere over the Atlantic around 600 miles off the coast of Newfoundland.

It stayed very high while covering the United States and then descended quickly before landing in an open field near a water reservoir, in Palo Alto, California.

The craft was extremely spherical, somehow more than a sphere, and had a silverish shine to it that seemed to reflect and somehow improve the colors around it.   

At first, the military surrounded it and sent all sorts of drones and probes to go and inspect the craft, but within 18 hours the craft began sending messages on multiple frequencies.

It started by explaining that it was a representative of a distant civilization called The Aleta. That they started as biological life forms but transitioned into a unified (part bio part technology) form eventually, and that they were here to share information with Earth at a crucial time of its development.

Upon figuring out that the craft could not be approached, destroyed, or moved in any way—and after the craft sufficiently explained its technological superiority and benign intentions—humans started listening to what it had to say.

Aleta (that’s what everyone was calling it) sent all sorts of information. It evidently figured out how to read all of our media and look for problems and trivialities. It started sending helpful information about how to solve them.

Some days, it would send extraordinary recipes for how to make better muffins, using an extracted type of butter and a new type of salt. It also released the most breathtaking and spellbinding 48-piece fantasy series that quickly became the most talked about and brilliant piece of art ever created. It was called Altered Dominion. Think Game of Thrones, Shakespeare, Harry Potter, The Notebook, and The Sopranos all rolled into one—except 1000 times more compelling.

On other days, it would release explanations of fundamental science, including an actual unified theory of physics. Formulas for new materials. And explanations of dark matter that we’ve never seen before. The Unified Theory was released on a Thursday, after it rained in Palo Alto for nearly 3 days straight, and after the season 35 finale of Altered Dominion.

But by the summer of 2029, after 2 1/2 years of constant talk and speculation worldwide, the mainstream conversation about Aleta began to change from wonder to disillusionment.

A small number of people—around 140,000 or so—were still glued to every transmission that came from the craft, and they had actually learned how to speak with it in real-time. They oriented their lives around all of its messages and started working on how to incorporate the new science and art streaming from the craft.

This group, who called themselves ATLiens for some strange reason, literally organized sleeping shifts, transcribing shifts, and studying shifts, and then a whole discipline around the incorporation of the new knowledge into how humans currently do things. To them, the knowledge that had been sent just in the last two years would take several decades, if not a couple of centuries, to properly integrate into human society. It was just that much data, and it was just that transformational.

But most people, once they had finished watching Altered Dominion, eating all the new recipes, and trying on all the new outfits, went back to TikTok.

By 2030, most people had forgotten about the strange sphere in Palo Alto. And more than forgetting about it, they took on a disappointed attitude towards it.

The difference between this group, which was most of humanity, and the cult-like ATLiens could not be more extreme. Most of the ATLiens moved to Palo Alto and surrounding areas in the Bay Area, just so they could be closer to others who understood the significance. They structured their lives around the regular broadcasts from Aleta and they could think of nothing else.

They studied every single piece of new information from Aleta, and figured out how to build new things using it. Many of them had trouble sleeping because they feared missing a transmission and all the wisdom contained within.

The difference in attitudes between the ATLiens and normal people was captured well by an interaction between a barista and Sarah Meyer, a devoted ATLien from Rochester now living in Fremont, who was getting her morning coffee at a Starbucks in Palo Alto.

The barista asked her what all her books and computers were for, and why she always seemed so excited when she came in. Sarah explained—a bit embarrassed—that she was one of “those people”, and that she comes in and gets her coffee, and then heads out to the reservoirs to receive new transmissions.

The barista wiped his hands, looked at her blankly, and said,


Oh yeah, is that thing still out there? (laughing)


Yeah, I’ve seen Dominion four times, but I just kind of tuned out after that.


(checking a name on a cup and frowning slightly)


All this time we were waiting for aliens, and all they end up giving us is a new TV show.



Published on August 20, 2024 09:41

UL NO. 446: AI Ecosystem Components, MS 0-Days, Iranian Campaign Hacks…


SECURITY | AI | MEANING :: Unsupervised Learning is a stream of original ideas, story analysis, tooling, and mental models designed to help humans lead successful and meaningful lives in a world full of AI .

TOC

NOTES

MY WORK

SECURITY

AI / TECH

HUMANS

IDEAS

DISCOVERY

RECOMMENDATION OF THE WEEK

APHORISM OF THE WEEK

NOTES

Hey!

Few things here to start out:

All better from being sick. Was quite minor. Would not even have known I was sick if not for testing.

We migrated Fabric to Go! It’s now easier to install and upgrade, and it’s way faster. INSTALL/MIGRATE

Joe Rogan had Peter Thiel on the podcast, and it was a brilliant conversation. One of the best podcasts of that type in months. MORE

I bought one of those mini-libraries to put in my neighborhood. Love the idea of sharing books with the local community!

Ok, let’s go…

MY WORK

My new essay on the 4 components (not just the model weights!) that will decide who wins out of OpenAI, Anthropic, Meta, or Google.

The 4 Components of Top AI Model Ecosystems


The four things I think will determine who wins the AI Model Wars


danielmiessler.com/p/ai-model-ecosystem-4-components

A short essay on what I see as the root of a lot of “LLMs can’t reason” arguments.

The Link Between Free Will and LLM Denial


Denying the specialness of LLMs seems tied to over-believing in the specialness of humans.


danielmiessler.com/p/free-will-llms


SECURITY

Microsoft just released patches for 90 security flaws, including 10 zero-days, with six of those being actively exploited. Notable vulnerabilities include CVE-2024-38189 (RCE in Microsoft Project), CVE-2024-38178 (memory corruption in Windows Scripting Engine), and CVE-2024-38213 (SmartScreen bypass). MORE

Russian cyberspies from the FSB, along with a new group called COLDWASTREL, have been running a massive phishing campaign dubbed "River of Phish" targeting US and European entities since 2022. The campaign aims to steal credentials and 2FA tokens from high-risk individuals, NGOs, media outlets, and government officials. MORE

The Pentagon is planning to flood the Taiwan Strait with thousands of drones in the event of a Chinese invasion. US Indo-Pacific Command chief Admiral Samuel Paparo described the strategy as creating an "unmanned hellscape" to delay Chinese forces and buy time for US and allied reinforcements. Weird that we just tell people our strategies like this, though. MORE

Sponsor

The Next Big Thing in Automated Security Investigations  

Dropzone.ai is the only company I’ve seen that has truly nailed the agent-driven approach to investigations. Or really, agents used in a cyber workflow.

What they do is take alerts that come from tools like PAN, and they start autonomously investigating them, just like a human analyst. This is where this is all going, and they’re the best I’ve seen. So much so that I’m now an advisor for them!!

By the way, if you’re interested in where this is all headed, check out this article on how Gartner just canceled SOAR. It’s a clear signal that companies like Dropzone are where things are going.

dropzone.ai/request-a-demo

Request a Demo

Jeff Sims has published a timeline of his research on offensive AI agents, detailing the development of three distinct types of offensive AI systems. MORE

SolarWinds has patched a critical deserialization vulnerability (CVE-2024-28986, CVSS 9.8) in its Web Help Desk software that could allow remote code execution. The flaw affects all versions up to 12.8.3 and has been fixed in hotfix 12.8.3 HF 1. MORE

Iranian banks have been hit by a massive cyber attack, reportedly one of the largest in the country's history. Seems likely tied to Israel/Iran tensions. MORE

Trump shared a fake image of Harris speaking at a Communist event. This one looks fairly fake, but 1) lots of people will still believe it’s real, and 2) current tech can already make more believable ones. We’re actually at the point I talked about here:

Iranian hacker group APT42 has targeted both Trump and Biden campaigns, according to Google's Threat Analysis Group. The group, believed to be working for Iran's Revolutionary Guard Corps, targeted both campaigns, but only Trump's campaign appears to have had sensitive files leaked to the press, which is quite curious. MORE

Trump corroborated this by pointing the finger at Iran for hacking his presidential campaign, praising the FBI's investigation into the breach. He mentioned that the FBI is handling it professionally and reiterated multiple times that Iran was behind it, though he didn't share specific details from the agency. MORE

Sponsor

ProjectDiscovery Cloud Platform Asset Discovery

Our latest release includes enhanced tech stack detection and universal asset discovery.

For Individuals & Bug Bounty Hunters: Discover and monitor up to 10 domains daily.

For Organizations: Uncover your external attack surface and cloud assets with automatic asset enrichment and daily monitoring.

Stay ahead with ProjectDiscovery Cloud Platform!

cloud.projectdiscovery.io

Discover Assets Today

China-linked cyber-spies have infected dozens of Russian government and IT sector computers with backdoors and trojans since late July, according to Kaspersky. The attacks, dubbed EastWind, are linked to APT27 and APT31, using phishing emails and cloud services like GitHub, Dropbox, and Quora for command-and-control. MORE

Scammers are targeting young Chinese job seekers in a tough economy, exploiting their desperation by offering fake job opportunities. MORE | Comments

AI / TECH

xAI’s Grok chatbot now lets users create images from text prompts and publish them to X, leading to chaotic results like Barack Obama doing cocaine and Donald Trump in a Nazi uniform. Really curious if this is going to get nerfed or not. Elon replied to one that had him pregnant standing next to Trump, and he replied, “Live by the sword, die by the sword.” MORE

Alex Wieckowski is on a mission to make you fall in love with reading again—and he thinks AI can help. In this episode, Alex shares how he uses AI tools like ChatGPT to recommend books, understand deeper themes in novels like Hermann Hesse’s "Siddhartha," and create actionable strategies from business books like Alex Hormozi’s "$100M Offers." MORE

Comedians are increasingly using AI to help write jokes and brainstorm ideas, with mixed results. I think this is similar to the Turing Test in terms of the importance of AI progress. If AI can write a full set of comedy and make humans laugh, that’s f*cking huge. MORE

San Francisco is looking to ban software that critics claim is being used to artificially inflate rents. The software in question allegedly helps landlords coordinate rent increases. MORE

You might be overusing Vim visual mode. This post argues that many Vim users rely too heavily on visual mode (I think I’m one of them), which can often be replaced with more efficient normal mode commands. Examples include using gg"+yG instead of ggVG"+y to copy a whole file and dk instead of Vkd to delete the current and previous lines. MORE

HUMANS

Some California residents will soon be able to add their driver’s licenses and state IDs to Apple Wallet as part of a pilot program launching this fall. The program will allow 1.5 million participants to use mobile IDs for TSA screening at LAX and SFO. MORE

China's manufacturers are facing a financial crisis, with many going bankrupt due to a combination of weak demand, rising costs, and increased competition. MORE

Scientists at Fermilab have detected the first neutrinos using a prototype detector for the Deep Underground Neutrino Experiment (DUNE). MORE

Venture capitalists aren't looking for nice founders; they want risk-takers. Nate Silver highlights that 70% of the billionaires on the 2023 Forbes 400 list are self-made, often coming from modest backgrounds. MORE

There's a growing trend of Gen Z men becoming NEETs (Not in Employment, Education, or Training), with one in five young men under 25 unemployed and not actively looking for work. MORE

"Slow is smooth, smooth is fast" is a mantra deeply ingrained in Navy SEAL operations, emphasizing precision over haste. This principle helps SEALs execute high-stakes missions with minimal errors, as seen in Operation Neptune Spear. MORE

No one wants kids anymore, and it's not just you. This video dives into the reasons behind the declining birth rates, touching on economic pressures, changing societal values, and personal choices. MORE

Imposter syndrome often stems from systemic biases, not just self-doubt. Harvard Business Review highlights that many women experience this due to real exclusionary practices. MORE

This guy got fired and replaced by AI at Cosmos Magazine, and the management didn't tell anyone. They are using generative AI to write articles, possibly trained on their own authors’ work. MORE

I gave my kids a summer like mine in the 1980s – This parent decided to give her 10 and 5-year-old daughters a taste of a 1980s summer holiday, where boredom was common and self-entertainment was key. MORE

IDEAS

Here are a few ideas I’ve had recently that I haven’t written essays for yet.

The Ultimate Privilege
I think the ultimate privilege might be growing up in a stable household with two parents who give you a strong work ethic.

It trips me out how simple this is, and how the best advice is often like this. It’s the same with diet, exercise, relationships, and a million other things. The best advice is concise, wise, and generally hard to do. But it’s not a mystery.

I think the US—and the world—should lock in on this one thing: stable two-parent households that imbue a strong work ethic—and focus a lot of energy on getting to 100% on that metric.



The biggest market opening right now is for a product/platform that validates the authenticity of content coming from a creator or publisher.


All the providers of content are going to have to work with the providers of computing platforms to produce a signing and UX standard.


— ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ ⚙️ (@DanielMiessler)
10:14 PM • Aug 14, 2024




I used to think there was a big difference between somebody being weak and somebody being evil.


I now treat them mostly the same because the outcomes they manifest are mostly the same.


The only difference is that with a weak person I can try to make them strong.


— ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ ⚙️ (@DanielMiessler)
5:17 PM • Aug 18, 2024


DISCOVERY

🔥Fabric + Raycast — Will Chen shows how to integrate Fabric into Raycast! Very cool. I’m adding this myself, forcing me to switch back to Raycast. In fact, I think I might integrate it more deeply by hosting a set of these scripts within Fabric, so you can just point Raycast to that directory! MORE

Eric Schmidt of Google did a crazy honest interview at Stanford and it was so spicy that Stanford took it down. Here’s the video and transcript. VIDEO | TRANSCRIPT | FABRIC SUMMARY

The Ideal Founding Team — Ben Horowitz lays out the perfect founding team in the clearest way I’ve ever seen. MORE

Scrape-it-now — A new CLI tool designed for AI-driven web scraping that ensures idempotency. MORE

Grok 2 — xAI has released Grok 2, a frontier class model capable of reasoning, coding, and mathematics. It also brings FLUX to X users in collaboration with Black Forest Labs. MORE

Prompt Caching With Claude — Anthropic has introduced prompt caching for its Claude models, allowing developers to cache frequently used context. Coming to Fabric soon! MORE
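For those who want to try it, here's a minimal sketch of what a prompt-caching call could look like, assuming the anthropic Python package; the model name, beta header value, file path, and prompt are placeholders I'm supplying for illustration, not details from the announcement.

```python
# Hedged sketch: cache a long, reused system document so later calls can hit the cache.
# Assumes the `anthropic` package and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

long_context = open("reference_doc.txt").read()  # placeholder for the big reused context

resp = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    extra_headers={"anthropic-beta": "prompt-caching-2024-07-31"},  # beta feature flag
    system=[
        {"type": "text", "text": "Answer questions about the attached document."},
        # The cache_control block marks this large chunk as cacheable across calls.
        {"type": "text", "text": long_context, "cache_control": {"type": "ephemeral"}},
    ],
    messages=[{"role": "user", "content": "Summarize the key points of section 3."}],
)
print(resp.content[0].text)
```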

Flux AI — By Black Forest Labs, Flux.ai is a new open-source AI image generation tool that runs on consumer-grade laptops. It excels in rendering people and prompt adherence, outperforming competitors like Midjourney in some aspects. MORE

GraphicInfo – A new website lets you generate infographics to make your articles more engaging. MORE

"Agile Is for Losers" is a rant about the author's decade-long frustrations with the Agile methodology infiltrating digital agencies. MORE

RECOMMENDATION OF THE WEEK

Stop accepting it when your loved ones—especially the young ones—are not AI-literate. Here’s the way to think about this…

Imagine that the competition level for getting top jobs, mates, whatever—was at 100 in 2022. And the average person was at like an 80.

Well, AI is Augmentation technology. It adds 20-50 points to people who get good at it. So now that person with an 85 learns AI and they’re a 125.

The new standard is now reset to 120.

So if you were a 90 before, or a 110, you’re now behind.

Don’t let your people get left behind. AI is the new reading. It’s the new high school diploma. It’s the new degree.

Make sure the people you love have it.

(And just to show you how real this is, and get you motivated—here’s an 8-year-old doing some live coding) MORE

APHORISM OF THE WEEK
Published on August 20, 2024 08:40

August 19, 2024

The 4 Components of Top AI Model Ecosystems

Table of Contents

The Model

Post-training

Internal tooling

Agents

Summary

I have been thinking a lot about the competition between OpenAI, Anthropic, Meta, and Google for who has the best pinnacle AI model.

I think it comes down to 4 key areas.

The Model Itself

Post-training

Internal Tooling

Agent Functionality

Let’s look at each of these.

The Model

The model is obviously one of the most important components because it’s the base of everything.

So here we’re talking about how big and powerful the base model is, e.g., the size of the neural net. This is a competition around training clusters, energy requirements, time requirements, etc. And each generation (e.g., GPT 3→4→5) it gets drastically more difficult to scale.

So it’s largely a resources competition there, plus some smart engineering to use those resources as efficiently as possible.

But a lot of people are figuring out now that it’s not just the model that matters. The post-training of the model is also super key.

Post-training

Post-training refines and shapes model knowledge to enhance its accuracy, relevance, and performance in real-world applications.

I think of it as a set of highly proprietary tricks that magnify the overall quality of the raw model. Another way to think of this is to say that it’s a way to connect model weights to human problems.

I’ve come to believe that post-training is pivotal to the overall performance of a model, and that a company can potentially still dominate if they have a somewhat worse base model but do this better than others.

I’ve been shouting from the rooftops for nearly two years that there is likely massive slack in the rope, and that the stagnation we saw in 2023 and 2024 around model size will get massively leaped over by these tricks.

Post-training is perhaps the most powerful category of those tricks. It’s like teaching a giant alien brain how to be smart, when it had tremendous potential before but no direction.

So the model itself might be powerful, but it’s unguided. So post-training teaches the model about the types of real-world things it will have to work on, and makes it better at solving them.

So that’s the model and post-training, which are definitely the two most important pieces. But tooling matters as well.

Internal tooling

What we’re seeing in 2024 is that the connective tissue around an AI model really matters. It makes the models more usable. Here are some examples:

High-quality APIs

Larger context sizes

Haystack performance

Strict output control

External tooling functionality (functions, etc)

Trust/Safety features

Mobile apps

Prompt testing/evaluation frameworks

Voice mode on apps

OS integration

Integrations with things like Make, Zapier, n8n

Anthropic’s Caching mode

Just like with post-training, these things aren’t as important as the model itself, but they matter because things are only useful to the extent that they can be used.

So, Tooling is about the integration of AI functionality into customer workflows.

Next, let’s talk about Agents.

Agents

Right now AI Agent functionality is mostly externally developed and integrated. There are projects like CrewAI, AutoGen, LangChain, LangGraph, etc., that do this with varying levels of success.

But first—real quick—what is an agent?

So basically, an AI Agent is something that emulates giving work to a human who can think, adjust to the input given, and intelligently do things for you as part of a workflow.

I think the future of Agent functionality is to have it deeply integrated into the models themselves. Not in the weights, but in the ecosystem overall.

In other words, we soon won’t be writing code that creates an Agent in Langchain or something, which then calls a particular model and returns the results to the agent.

Instead, we’ll just send our actual goal to the model itself, and the model will figure out what part needs agents to be spun up, using which tools (like search, planning, writing, etc.) and it’ll just go do it and give you back the result when it’s done.

This is part of this entire ecosystem story. It’s taking pieces that are external right now (Agent Frameworks) and bringing them internal to the native model ecosystem.
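To make the contrast concrete, here's a rough sketch of the kind of external orchestration loop I'm describing, assuming the openai Python package; the web_search tool, model name, and goal are illustrative stand-ins rather than any particular framework.

```python
# Hedged sketch of today's "external" agent loop: the calling code owns the loop,
# defines the tools, and feeds results back to the model until it finishes.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

def web_search(query: str) -> str:
    """Placeholder tool; a real agent would call an actual search API here."""
    return f"(pretend search results for: {query})"

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for information.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "Research recent progress on prompt caching and summarize it."}]

while True:
    resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    msg = resp.choices[0].message
    if not msg.tool_calls:          # the model decided it's done; print the final answer
        print(msg.content)
        break
    messages.append(msg)            # keep the assistant's tool-call turn in the history
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)
        result = web_search(**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```

The point is that this glue code is the part that goes away: you hand the goal to the ecosystem and it runs something like this loop internally.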

Summary

We should start thinking about top AI models as Model Ecosystems rather than just models because it’s not just the neural net weights doing the work.

There are four (4) main components to a Model Ecosystem—the Model itself, Post-training, Internal Tooling, and Agent functionality.

#1 (The model) is the most well-known piece, and it’s largely judged by its size (billions of parameters).

#2 (Post-training) is all about teaching that big model how to solve real-world problems.

#3 (Internal Tooling) is about making it easier to use a given model.

#4 (Agent functionality) emulates human intelligence, decision-making, and action as part of workflows.

The company that wins the AI Model Wars will need to excel at all four of these, not just spend lots of money to have the neural net with the most parameters.

NOTES

Thanks to Jai Patel for informing many thoughts on this, especially around pre-training.


Published on August 19, 2024 13:27

August 18, 2024

The Link Between Free Will and LLM Denial


I think a hidden tendency towards a belief in Libertarian free will is at the root of people’s opinion that LLMs aren’t capable of reasoning.

I think it’s an emotional and unconscious argument that humans are special, and that by extension—LLMs cannot possibly be doing anything like we are doing.

But if you remember that humans don’t have free will, and that all of our outcomes are either determined or random, it allows us to see LLMs more like us. Which is to say—imperfect but awesome. And then we can switch to speaking purely in terms of capabilities.

So let us say that we’re both deterministic. Or at least mechanistic and practically deterministic because any quantum randomness collapses to deterministic at large scales.

In this model both humans and LLMs are just processors. We're computational devices. We take in inputs, and based on our current state and the state of the environment and the input, we output something.

Cool. So what’s the real question we’re then asking when we ask if LLMs can reason?

First let's remember something. We’re not taking away the human ability to reason just because we are processors, right? No. Let’s not do that. We're still awesome even if we're mechanistic.

In other words, let’s say for the purpose of this that reasoning is consistent with mechanistic/deterministic processing.

Now, let’s find a good definition. Here are some from Merriam-Webster.


REASONING — 1: The use of reason, especially the drawing of inferences or conclusions through the use of reason. 2: An instance of the use of reason; an argument.

Merriam-Webster

REASON — The ability to think, understand, and form judgments by a process of logic.

Merriam-Webster

LOGIC — A science that deals with the principles and criteria of validity of inference and demonstration.

Merriam-Webster

Ok, so if we take these all the way down to the base and build back up:

Principles of validity and inference and demonstration

The ability to think, understand, and form judgements based on that

So,


The ability to think, understand, and form judgements around the principles of validity and inference and demonstration.

My smashing these together

Seems pretty good. And then you have a more common definition based on practicality which is something like:


Reasoning is the process of drawing conclusions, solving problems, and making decisions through logic.

A commonly-accepted functional definition

Regardless of which way we go, we have a couple key sticking points. And they're very tied to my main argument here.

First, the words "think" and "understand"—I would argue—are very much tied to consciousness and Libertarian Free Will. I see these as armaments that LLM-Reasoning skeptics would use to show why LLMs can't be reasoning.

I see them saying something like:


Reasoning means feeling through things. Thinking about them. Pondering them. Grappling with them. And then taking all the person's experience, and the rules of logic, and their understanding of things, plus their intuition, and turning that into an opinion, or a determination, or a decision.

A common argument I hear from LLM-Reasoning skeptics

Sounds compelling, but if you break it apart I would argue they're unconsciously conflating experience and understanding with actual processing.

In other words, I think they're saying that the thinking and understanding parts are key. As in the human experience of understanding and pondering. They're smuggling these in as essential, when I think they're just red herrings.

Same with "grappling" and "intuition". If we don't have free will, these are all just states of the processing mind that are happening, and our subjective experiences are then being presented with those phenomenon and we're ascribing agency to them.

That's thinking. That's intuition. That's experience. And I think understanding is the same. It's an experience of seeing mappings between concepts and ideas. But in my model the mapping can exist without that subjective experience.

So, I say we take those distractions out of the equation and see what we have left. And what we have left is drawing conclusions, solving problems, and making decisions based on our current model of the world.

The model of the world is the weights that make up the LLM, combined with the context given to it at inference. So it seems to me like we're left with a much simpler question.

Can LLMs draw conclusions, solve problems, and make decisions based on their current model of the world?

I don't see how anyone would say no to that.

Are they perfect? No. Are they conscious? No. Are they "thinking"? I think "thinking" smuggles in subjective experience, so no. But again—those are distractions.

The question is whether LLMs can do this very practical thing that matters in the world, which is drawing conclusions, solving problems, and making decisions.

I think the answer is overwhelmingly and obviously, yes.

As a quick set of examples, we're already using them for:

Identifying dangerous moles on people that otherwise might have gone undiagnosed

Dealing with customer service problems by analyzing cases and tone and coming up with solutions that best help the company and customer

Talking through problems and identifying possible causes and solutions in mental health therapy

Assisting in legal research by analyzing case law and suggesting relevant precedents

Diagnosing diseases by analyzing medical images, such as identifying pneumonia in chest X-rays

Optimizing supply chains by predicting demand and suggesting inventory adjustments

Automating financial trading by making decisions based on market data analysis

Improving cybersecurity by identifying potential threats and suggesting mitigations

Personalizing marketing by predicting customer preferences and tailoring recommendations

Enhancing customer service through chatbots that resolve issues based on previous interactions

Detecting fraudulent transactions by analyzing patterns in financial data

Predicting equipment failures in manufacturing through analysis of sensor data

Assisting in drug discovery by predicting molecule interactions and potential outcomes

And a thousand more that we're already familiar with.

Some might say they're not doing "real" things, but just pattern matching and autocompletion.

That's the whole point of what we've been talking about here. That's the whole reason we've explored the argument in this way. We live in a human world where humans have problems and need to solve them.

That’s what logic and reasoning are for.

So what if it's just pattern matching? So what if it's just input + current_state = output. Are humans really all that different? Are we not just as surprised when inspiration—or the very next thought—pops into our minds?

Either way it's a black box information processor with physical limitations.

I think what matters is capabilities. And where capabilities are concerned, LLMs seem remarkably similar to us, and they're catching up every day.


Published on August 18, 2024 15:21

August 13, 2024

UL NO. 445: Vegas Dump, Legal Firm Hacks, AI Agent Ascension


SECURITY | AI | MEANING :: Unsupervised Learning is my continuous stream of original ideas, story analysis, tooling, and mental models designed to help humans lead successful and meaningful lives in a world full of AI .

TOC

NOTES

MY WORK

SECURITY

AI / TECH

HUMANS

IDEAS

DISCOVERY

RECOMMENDATION OF THE WEEK

APHORISM OF THE WEEK

NOTES

Hey there!



Can we interest you in a newsletter?


@DanielMiessler@mikepsecuritee@clintgibler


— Matt Johansen (@mattjay)
1:09 AM • Aug 7, 2024


Was super cool meeting up with folks at Blackhat and DEFCON, and getting to meet people like Dhruv at my Recon Village talk.



is one of the awesome folk in the cybersec industry. I love his unsupervised learning newsletter, his work. You should checkout his GitHub too. I had the privilege of meeting him at @ReconVillage, he is an awesome human being.


#Defcon#cybersec


— Dhruv Shah (@Snypter)
2:24 AM • Aug 12, 2024


Now back from Vegas. Sick, as expected. But not bad at all.

Worth it!

The highlight of my 6 days in Vegas was—without question—our in-person UL Dinner. 20-something members all together, talking for 2 hours. Most importantly I got to meet Tim Leonard for the first time. Tim is one of the cornerstones of the UL community, and we’ve become good friends over multiple years but had never met. Was so great to fix that.

🤯 Ok, TONS OF STORIES to share this week, so it’s going to be kind of a giant DISCOVERY section type of vibe.

Actually, so much so that you might as well just click this button now.

Continue reading online to avoid the email cutoff…

Let’s go…

MY WORK

Lots in the queue…

SECURITY

Since 2018, 138 legal firms globally have confirmed ransomware attacks affecting 2.9 million records. MORE

💡Attacking legal firms has always been super interesting to me, just because of the sheer amount of drama they deal with. Mergers, acquisitions, suits, contracts, relationships, fights, disputes, etc. It’s a lot of high-value information.

It also (surprise) highlights a massive attacker use-case for AI.

The problem with compromising a giant law firm’s files is that there could be hundreds of thousands of pages of crap in there. And if you wanted to go through it and look for juicy stuff for extortion, blackmail, ransom, etc.—it’d take a ton of people running grep like 1990’s lawyers with boxes of paperwork and pizza takeout.

But not with AI. Now you can put all those docs into a local Chroma database (vectorized), and ask it questions using an uncensored (and perhaps even fine-tuned) version of Llama 3.1.

So now, with some smart prompting, you can ask a set of 25 questions to such a dataset that pull out ALL your attack use-cases. And hell—even write the attack emails for you.

And for defense the methodology is very similar. Do the same thing—to yourself—and those emails that come out become your likely attack scenarios. So you go and clean them up (prevent) or prepare responses (response) for if they happen.
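Here's a minimal sketch of what that defensive, do-it-to-yourself version could look like, assuming the chromadb and ollama Python packages, a local Llama 3.1 model pulled through Ollama, and a ./docs folder of plain-text files; the paths, model name, and question are placeholders, not a prescription.

```python
# Hedged sketch: vectorize your own documents, then ask a local model which ones
# an attacker could weaponize. Assumes `chromadb`, `ollama`, and a pulled llama3.1.
import pathlib
import chromadb
import ollama

client = chromadb.Client()
collection = client.create_collection("firm-docs")

# Index the documents; Chroma's default embedding function handles vectorization.
for i, path in enumerate(pathlib.Path("./docs").glob("*.txt")):
    collection.add(ids=[f"doc-{i}"], documents=[path.read_text()])

question = (
    "Which of these documents describe disputes, settlements, or relationships "
    "that could be used for extortion or blackmail?"
)
hits = collection.query(query_texts=[question], n_results=5)

# Feed the most relevant documents to a local model and ask for attack scenarios.
context = "\n\n".join(hits["documents"][0])
answer = ollama.chat(
    model="llama3.1",
    messages=[{"role": "user", "content": f"{question}\n\nDocuments:\n{context}"}],
)
print(answer["message"]["content"])
```

Swap in your own question list (the 25-question idea above) and the outputs become the attack scenarios you prepare for.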

Thousands of hackers and security pros gathered at Black Hat and Def Con 2024 to share the latest in security research. Highlights included hacking Ecovac robots to spy on owners, Jon DiMaggio doxing the LockBit ransomware leader, and Samy Kamkar's laser microphone that can hear keyboard taps. Other notable research showed how prompt injections can trick Microsoft Copilot and how Vangelis Stykas saved six companies from ransomware by exploiting flaws in leak sites. MORE

Sponsor

2024 Gartner® Market Guide for CNAPP

Find recommendations for evaluating and adopting a CNAPP in the 2024 Gartner® Market Guide for CNAPP

Read the report to learn:

The benefits of a CNAPP solution in your cloud security strategy

Key capabilities and characteristics to look for in a CNAPP, including deep relationship graph analytics expertise 

Recommendations for how you should approach a CNAPP evaluation and deployment

 wiz.io/lp/2024-gartner-market-guide-for-cnapp

Get the Report

CISA appointed Lisa Einstein as its first Chief Artificial Intelligence Officer to advance cybersecurity efforts in using AI responsibly.

Checkmarx researchers found an infostealer campaign targeting Raydium and Solana blockchain users by spreading malicious PyPi packages through StackExchange answers. MORE

A new Android trojan called BlankBot is targeting Turkish users by posing as utility apps and tricking them into granting permissions. BlankBot can log device information, steal sensitive data, and perform custom injections. MORE

A critical security bypass vulnerability (CVE-2024-6242) has been found in Rockwell Automation ControlLogix 1756 devices, allowing attackers to execute CIP programming and configuration commands. MORE

Sponsor

13 Cybersecurity Tools. One Platform. Built for IT Teams

There are thousands of cybersecurity point solutions. Many of them are good—but managing more than a dozen tools, disparate reports, invoices, trainings, etc. is challenging for small IT teams.

We’ve built a platform that does assessments, testing, awareness training, and 24/7/365 managed security all in a single pane of glass. Because every company deserves robust cybersecurity.

 defendify.com

Book A Demo

Here’s a 7-stage roadmap for ramping up in AWS pentesting, starting with solving Red Team labs and progressing to automating exploits for CTFs and building secure AWS environments with CloudSLAW. MORE

An argument to use secure guardrails instead of traditional “shift-left” gates. Secure guardrails integrate directly into developer workflows, offering autofixes or advice that aligns with organization policies while empowering developers to write more secure code. MORE

2.7 billion personal records, including Social Security numbers, were leaked from National Public Data on a hacking forum. MORE

A Chinese hacking group named StormBamboo compromised an ISP to inject malware into software updates by exploiting insecure update mechanisms. MORE

A Russia-linked group used a car-for-sale phishing lure to target diplomats with a Windows backdoor called HeadLace. MORE

China-linked hackers known as Evasive Panda compromised an ISP to push malicious software updates to target companies in mid-2023. MORE

INTERPOL recovered over $40 million stolen in a BEC attack on a company in Singapore using a global stop-payment mechanism. MORE

Tavis Ormandy has dissected the CrowdStrike incident, providing a detailed analysis of the vulnerabilities exploited and the attack vectors used. MORE

The U.S. is planning to ban Chinese software in autonomous and connected vehicles due to national security concerns. MORE

Federal prosecutors have indicted North Korean hacker Rim Jong Hyok for ransomware attacks on American health care facilities, using the proceeds to fund espionage against U.S. military and defense contractors. MORE

Trail of Bits performed an audit of the popular macOS package manager, Homebrew and found several issues in the brew CLI that could allow for unsandboxed, local code execution. MORE

The White House is launching a new office under the Department of Homeland Security to study and secure open source software in critical infrastructure. MORE

Former President Trump's campaign confirmed it was hacked, with Microsoft attributing the attack to Iranian cyber-enabled influence operations. MORE

NCC Group researchers found vulnerabilities in Sonos smart speakers that allow remote code execution and potential eavesdropping. Sonos has patched these vulnerabilities, so updating your speakers is recommended. MORE

Some researchers deployed canary tokens (fake AWS credentials) in public online locations to study threat actor behavior. MORE

Russian drones are using fiber-optic cables to avoid radio jamming, a surprising twist in drone warfare. MORE

China is stockpiling critical resources like lithium, copper, and food in preparation for potential conflicts and economic disruptions, especially with the possible return of Trump and his unpredictable policies. MORE

AI / TECH

AI agents that perform tasks instead of humans are closer than we think. According to Capgemini, by 2025, AI-powered agents will be working together to resolve issues in a multi-agent system. They believe these agents will handle everyday tasks. MORE

💡Um, yeah. This is real AI, as I talk about basically every week.

AI's Predictable Path


Technological progress isn't predictable, but the human desires that drive it are…


danielmiessler.com/p/ai-predictable-path-7-components-2024


Cisco's new State of Industrial Networking Report highlights that AI and cybersecurity are the top investment priorities for industrial organizations. MORE

💡New rule: From now on, whenever you hear someone is “INVESTING IN AI”, replace that in your head with:

"So and so is ‘INVESTING IN TENS OF THOUSANDS OF SMART DEPENDABLE WORKERS THAT DO THINGS AS WELL OR BETTER THAN MOST HUMANS BUT COST A FRACTION OF THE COST’.

Turns out, everyone needs that.

The FCC has proposed new regulations requiring AI-generated voice calls to disclose their artificial nature at the beginning of calls. Cool, but how do you enforce it? MORE

Uber's Q2 results emphasized its growing AV segment, highlighting a 6x rise in autonomous trips year-over-year and partnerships with AV leaders like Waymo and Alphabet. MORE

A bunch of AI startups that raised billions last year are now struggling and looking to Big Tech for bailouts. A lot of people are saying this is the end of AI hype, and that it’s about to crash now. I think they’re very wrong. Those companies will pop, but that has nothing to do with the actual trend. MORE

Meta is reportedly offering millions to celebrities like Awkwafina, Judi Dench, and Keegan-Michael Key to use their voices in upcoming AI projects. MORE

OpenAI guarantees structured outputs in API responses with the latest version of GPT-4o, which now follows the provided schema with 100% accuracy and is 50%/33% cheaper for inputs and outputs. MORE
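Here's a minimal sketch of what that looks like in practice, assuming the openai Python package; the schema, prompt, and model snapshot name are illustrative rather than pulled from the announcement.

```python
# Hedged sketch: ask for output that must conform to a JSON Schema (strict mode).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

resp = client.chat.completions.create(
    model="gpt-4o-2024-08-06",
    messages=[{
        "role": "user",
        "content": "Extract the CVE IDs from: 'Patch CVE-2024-38189 and CVE-2024-38213 today.'",
    }],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "cve_list",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"cves": {"type": "array", "items": {"type": "string"}}},
                "required": ["cves"],
                "additionalProperties": False,
            },
        },
    },
)
print(resp.choices[0].message.content)  # guaranteed to parse against the schema above
```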

Microsoft and Palantir have partnered to deliver advanced AI, including GPT-4, and analytics capabilities to U.S. Defense and Intelligence agencies through classified cloud environments. Palantir is a bit radioactive, so I won’t be surprised if this gets a lot of hate / scrutiny. MORE

AWS Bedrock has achieved FedRAMP High authorization, allowing GovCloud users to access managed LLMs. MORE

Sam Altman posted a seemingly innocent picture of strawberries on X, sparking rumors about a new OpenAI foundation model codenamed "Strawberry." Seriously good marketing. MORE

OpenAI just led a $60M funding round for Opal, a startup making high-end webcams. This is fascinating. Like what else is going on there? MORE

Snowflake is looking to boost its revenue by partnering with Canadian AI model developer Cohere. Data + AI? Who knew? MORE

Anduril Industries, the AI weapons startup founded by Palmer Luckey, is now valued at $14 billion after a recent funding round. MORE

WeRide, a Chinese autonomous vehicle startup, is gearing up for a U.S. IPO by registering 1 billion American Depository Receipts (ADRs) at $0.05 each, totaling $50 million. I’m very much pro-competition, but I’d much rather support Tesla, Waymo, Uber autonomous vehicles than a Chinese version. I think we should actively ban them from operating here if they try to. MORE

YouTube is testing a new feature called Brainstorm with Gemini, which integrates Google Gemini to help creators brainstorm video ideas, titles, and thumbnails. MORE

Anthropic is expanding its bug bounty program to crowdsource security for its AI safety systems. MORE

Cloudflare is rolling out Automatic SSL/TLS to enhance security between Cloudflare and origin servers without manual configuration. MORE

Groq just raised $640M in a Series D round to meet the growing demand for fast AI inference, bringing their valuation to $2.8B. MORE

Billions of dollars in venture capital are pouring into defense-tech startups, with a focus on futuristic, AI-enabled weapons. MORE

X (formerly Twitter) is reportedly shutting down its San Francisco office in the next few weeks. It’s moving to the South Bay. MORE

China's total wind and solar capacity has now surpassed its coal capacity, according to Rystad Energy. MORE

The NFL is rolling out facial recognition tech from Wicket across all 32 teams to streamline and secure credentialing for staff, media, and fans. MORE

The "Experts Roundtable" Prompt simulates a consulting session with top experts, helping you make important decisions for free. MORE

Alex Plescan shares his journey from iTerm 2 to WezTerm, highlighting the terminal's powerful API and Lua-based configuration. I might be switching from Kitty myself. We’ll see. MORE

iOS 18 expands its ambient noise lineup with two new sounds: Fire and Night. We sleep with Ocean quite a bit. MORE

HUMANS

Dell just laid off around 12,500 people, which is about 10% of its workforce. MORE

The US wrapped up the Paris 2024 Olympics with 40 gold medals, tying China for the most golds but leading the overall medal count with 126. The women's basketball team clinched the final gold by narrowly defeating France, marking their eighth consecutive Olympic gold. MORE

Poetry was an official Olympic event for nearly 40 years, starting with the 1912 Stockholm Games, where Pierre de Coubertin's "Ode to Sport" won the first gold medal. MORE

Ukrainian forces have advanced 9 miles into Russia's Kursk Oblast, marking their largest incursion since the war began. MORE

The U.S. is ramping up its military cooperation with Japan in response to rising tensions with China. MORE

Putin has signed a new law that requires bloggers with over 10,000 subscribers to register with Roskomnadzor (RKN) and provide their information. MORE

A Russian chess player, Amina Abakarova, allegedly tried to poison her opponent, Umayganat Osmanova, with liquid mercury during a tournament in Dagestan. MORE

Russia's deep-cover spies, known as "illegals," live under false identities for years, infiltrating target regions and building complete false lives. Like The Americans! Best spy show ever, maybe. MORE

Some companies are using return-to-office mandates to make employees quit, and it’s causing higher-than-expected attrition rates, especially among women and underrepresented groups. Remember what we’ve been saying: companies want all-in cult members. Lower head count is a good thing for them. MORE

Curtis Yarvin, a far-right thinker, has been gaining influence among Silicon Valley's extreme factions, including billionaires Peter Thiel and Marc Andreessen. MORE

The Anatomy of Brainwashing dives into the psychological mechanisms behind brainwashing, exploring how techniques like isolation, repetition, and emotional manipulation can alter a person's beliefs and behaviors. MORE

Susan Silk and Barry Goldman introduce the Ring Theory, a method to help people avoid saying the wrong thing during a crisis. MORE

Andrej Karpathy tweeted that Reinforcement Learning from Human Feedback (RLHF) is only marginally related to actual reinforcement learning. MORE | Comments

Private-equity firms taking over hospitals leads to significant asset stripping, reducing the facilities' ability to care for patients, according to a study by UCSF, Harvard Medical School, and CUNY. MORE

🔭 Set your alarm for 4:00 A.M. on August 14 to catch a rare celestial event: Mars and Jupiter will appear as a double star in the sky, and you might also see some Perseid meteors. MORE

Nepal is deploying DJI drones to transport garbage from Everest, aiming to reduce the risks Sherpas face in the dangerous Khumbu Icefall. MORE

IDEAS


I want to say something about the woman who's being made fun of for her breakdancing performance at the Olympics.

It's weak sauce to make fun of people for doing a bad job when they're young, or just starting, or have some sort of disadvantage.

It's just mean.

What… x.com/i/web/status/1…


— ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ ⚙️ (@DanielMiessler)
7:52 PM • Aug 12, 2024




Business Idea Quality (BIQ) = (the scale × severity of the pain you are addressing) × (the uniqueness × elegance of your solution)

There are four values in this equation, and it's all multiplication.

So as any one of the four goes to zero, so does the whole product.


— ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ ⚙️ (@DanielMiessler)
6:52 PM • Aug 12, 2024
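
To make the multiplication concrete, here's a minimal sketch of the BIQ formula in Python. The 0-to-10 scoring scale and the example numbers are my own illustrative assumptions, not part of the original idea.

```python
# Minimal sketch of the Business Idea Quality (BIQ) formula described above.
# The 0-10 scale per factor and the example scores are illustrative assumptions.
def biq(scale: float, severity: float, uniqueness: float, elegance: float) -> float:
    """BIQ = (scale * severity of the pain) * (uniqueness * elegance of the solution)."""
    return (scale * severity) * (uniqueness * elegance)

# A huge, painful problem with an average solution still scores reasonably well...
print(biq(scale=9, severity=9, uniqueness=5, elegance=5))   # 2025.0

# ...but if any single factor goes to zero, the whole product collapses.
print(biq(scale=9, severity=9, uniqueness=0, elegance=10))  # 0.0
```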




AI is not a thing itself—it’s a magnifier of human things.


So as an AI enthusiast or investor, don’t look at the tech. Look for magical experiences.


The demo of the product should produce an emotional reaction while the AI itself is completely invisible.


— ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ ⚙️ (@DanielMiessler)
5:13 PM • Aug 11, 2024


One of my favorite AI/Cyber ideas from my friends Joel Parish and Gabe Bernadette-Shapiro:



Summarization is the most dangerous cyber threat.


In my opinion the most dangerous LLM cyber capability is summarization. It is the most effective, the most affordable, and the easiest to scale and add to existing operations.


Why summarization?


Well there’s a lot bundled in… x.com/i/web/status/1…


— Gabe (@Gabeincognito)
9:31 PM • Aug 11, 2024
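
As a rough illustration of why summarization scales so cheaply, here's a minimal sketch of my own (not from the thread) of batch summarization with an LLM. The model name, prompt, and folder name are hypothetical.

```python
# Minimal sketch of summarization at scale: a few lines of Python turn any pile of
# documents into analyst-ready bullet points. Model, prompt, and folder are hypothetical.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize the key people, systems, and decisions in this document in three bullets."},
            {"role": "user", "content": text[:12000]},  # naive truncation to stay within context limits
        ],
    )
    return resp.choices[0].message.content

# Point it at a folder of collected documents and it triages all of them unattended.
for doc in Path("collected_docs").glob("*.txt"):
    print(doc.name, "->", summarize(doc.read_text()))
```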


lol



Currently testing negative, but pretty sure 90+% of us Vegas security peeps will have Covid by Monday.


— ᴅᴀɴɪᴇʟ ᴍɪᴇssʟᴇʀ ⚙️ (@DanielMiessler)
4:31 AM • Aug 9, 2024


DISCOVERY

Huntsman - Helps you gather email addresses, generate usernames, and validate context using popular services like hunter.io, snov.io, and skrapp.io. MORE

"Don't model the problem" is a video that explores an alternative approach to programming by focusing on solving problems directly rather than creating complex models. MORE

The real "Wolf of Wall Street" sales script. MORE

Git-metrics — Lets you attach, replace, or remove metrics for a given commit directly within your Git repository. MORE

Prompt Airlines is an AI Security CTF with 5 levels of increasingly difficult challenges, aiming to manipulate an AI chatbot to get a free airline ticket. MORE

Dioptra is a software test platform for assessing the trustworthy characteristics of AI, ensuring it is valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced, and fair with managed harmful bias. MORE

urlhunter is a recon tool that lets you search for URLs exposed by shortener services like bit.ly and goo.gl. MORE

Figure 02 – Figure has released its newest humanoid robot with enhanced intelligence and a sleeker design. MORE

Neighborbrite — Get instant landscaping inspiration for your yard. MORE

LangGraph Engineer — This alpha version agent helps bootstrap LangGraph applications by creating the correct nodes and edges, but leaves the logic to you. MORE

TrailShark – A tool that integrates AWS CloudTrail logs with Wireshark for real-time analysis of API calls. MORE

AWS Reasonable Account Defaults – A CloudFormation template to create reasonable account defaults around Cost Surprise Alerting. MORE

WireGuard-rs – There's now an official Rust implementation of WireGuard, which promises to bring the same secure VPN capabilities with the added benefits of Rust's safety and performance features. MORE

Developing CLIs — A detailed guide on building Command Line Interfaces (CLIs) using Go, focusing on best practices and practical tips. MORE

"Go is my hammer, and everything is a nail" explores the author's journey of using Go for almost every project, regardless of its suitability. MORE

Things I've Learned Building a Modern TUI Framework — Will McGugan shares insights from developing Textual, a modern Text User Interface (TUI) framework. MORE

RECOMMENDATION OF THE WEEK

If / when you feel overwhelmed by content, remember what Riva Tez said to David Perell a long time ago:

"You can't necessarily think yourself into the answers. You have to create space for the answers to come to you."

In other words, use one or more of these techniques to clear your mind:

News fast

Physical books only for 2 weeks

Take a nature-only vacation

Information fast

Dopamine fast

Etc.

Then come back fresh and redo your inputs to make sure they’re not overwhelming and/or noisy.

Repeat in 6 months.

APHORISM OF THE WEEK