-0.02 AI adoption and Solow's productivity paradox (fortune.com)
791 points by virgildotcodes 8 days ago | 751 comments on HN | Neutral Editorial · v3.7
Summary · Work & Economic Productivity · Acknowledges
This Fortune article reports research findings on AI adoption among executives, documenting that most firms (90%) perceive no impact on employment or productivity despite significant investment, mirroring economist Robert Solow's 1970s-1980s observations about the IT productivity paradox. The piece engages with employment security and work technology adoption, presenting data from 6,000 executives across multiple countries and discussing divergent expectations about future economic impacts, while simultaneously implementing comprehensive user surveillance through tracking mechanisms.
Article Heatmap
Preamble: -0.02
Article 3: +0.10 — Life, Liberty, Security
Article 12: -0.40 — Privacy
Article 19: +0.14 — Freedom of Expression
Article 22: +0.10 — Social Security
Article 23: +0.10 — Work & Equal Pay
Article 24: -0.05 — Rest & Leisure
Article 25: 0.00 — Standard of Living
All remaining articles (1–2, 4–11, 13–18, 20–21, 26–30): No Data
Aggregates
Weighted Mean -0.02 Unweighted Mean -0.00
Max +0.14 Article 19 Min -0.40 Article 12
Signal 8 No Data 23
Confidence 15% Volatility 0.16 (Medium)
Negative 3 Channels E: 0.6 S: 0.4
SETL +0.13 Editorial-dominant
FW Ratio 63% 26 facts · 15 inferences
Evidence: High: 1 Medium: 6 Low: 1 No Data: 23
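The aggregate rows above follow from the eight scored spans in the heatmap. The dashboard's weighting scheme isn't published, so this sketch only recomputes the unweighted mean and the facts-vs-inferences ratio; it is an illustration, not the site's actual pipeline.

```python
# The eight scored spans from the heatmap (all other articles are No Data).
scores = {
    "Preamble": -0.02,
    "Article 3": 0.10,
    "Article 12": -0.40,
    "Article 19": 0.14,
    "Article 22": 0.10,
    "Article 23": 0.10,
    "Article 24": -0.05,
    "Article 25": 0.00,
}

# Simple average over scored spans; displays as -0.00 at two decimals.
unweighted_mean = sum(scores.values()) / len(scores)

# FW ratio: facts / (facts + inferences) from the "26 facts · 15 inferences" row.
fw_ratio = 26 / (26 + 15)

print(f"Unweighted mean: {unweighted_mean:+.2f}")
print(f"FW ratio: {fw_ratio:.0%}")
```

The weighted mean of -0.02 presumably applies per-article weights (evidence strength is one plausible candidate), which cannot be reproduced from the page alone.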
Theme Radar
Foundation: -0.02 (1 article) · Security: 0.10 (1 article) · Legal: 0.00 (0 articles) · Privacy & Movement: -0.40 (1 article) · Personal: 0.00 (0 articles) · Expression: 0.14 (1 article) · Economic & Social: 0.04 (4 articles) · Cultural: 0.00 (0 articles) · Order & Duties: 0.00 (0 articles)
HN Discussion 20 top-level · 30 replies
DaedalusII 2026-02-18 02:04 UTC link
If you include Microsoft Copilot trials in Fortune 500s, absolutely. A lot of major listed companies are still oblivious to what AI can do; their senior management don't even use it, out of laziness.
bubblewand 2026-02-18 02:14 UTC link
My company’s behind the curve, just got nudged today that I should make sure my AI use numbers aren’t low enough to stand out or I may have a bad time. Reckon we’re minimum six months from “oh whoops that was a waste of money”, maybe even a year. (Unless the AI market very publicly crashes first)
Herring 2026-02-18 02:15 UTC link
My compsci brain suggests large orgs are a distributed system running on faulty hardware (humans) with high network latency (communication). The individual people (CPUs) are plenty fast, we just waste time in meetings, or waiting for approval, or a lot of tasks can't be parallelized, etc. Before upgrading, you need to know if you're I/O Bound vs CPU Bound.
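The I/O-bound vs CPU-bound framing has a standard quantitative form in Amdahl's law: speeding up one component helps only in proportion to that component's share of the whole. A minimal sketch (the 20% coding share below is purely illustrative, not a figure from the thread):

```python
def amdahl(p: float, s: float) -> float:
    """Overall speedup when a fraction p of the work is sped up by factor s."""
    return 1.0 / ((1.0 - p) + p / s)

# If coding is only 20% of the delivery pipeline and an agent makes it
# 10x faster, the end-to-end speedup is modest (~1.22x): the org stays
# bottlenecked on the other 80% (meetings, approvals, review latency).
print(amdahl(0.20, 10.0))
```

This is why "upgrading the CPUs" barely moves the needle in an I/O-bound organization.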
sebmellen 2026-02-18 02:20 UTC link
The thing with a lot of white collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time consumed from thought to completion.

Other white-collar business/bullshit-job (à la Graeber) work is meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.

Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.

What’s different about coding is no one actually cares about code as output from a business standpoint. The code is the end destination for decided business processes. I think, for that reason, that code is uniquely well adapted to LLM takeover.

But I’m not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. But an LLM automating a new feature release and drafting a press release and hopping on a sales call to sell the product is (IMO) further off than turning a detailed prompt into a fully functional codebase autonomously.

crazygringo 2026-02-18 02:26 UTC link
Just to be clear, the article is NOT criticizing this. To the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].

Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show clear benefit to the economy overall.

The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.

And so we should expect AI to look the same -- it's helping some people, but it's also costing an extraordinary amount of money, and those gains are currently outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days: productivity will rise and costs will come down as we learn to integrate it with best practices.

[1] https://en.wikipedia.org/wiki/Productivity_paradox

n_u 2026-02-18 02:28 UTC link
Original paper https://www.nber.org/system/files/working_papers/w34836/w348...

Figure A6 on page 45: Current and expected AI adoption by industry

Figure A11 on page 51: Realised and expected impacts of AI on employment by industry

Figure A12 on page 52: Realised and expected impacts of AI on productivity by industry

These seem to roughly line up with my expectations that the more customer facing or physical product your industry is, the lower the usage and impact of AI. (construction, retail)

A little bit surprising is "Accom & Food" being 4th highest for productivity impact in A12. I wonder how they are using it.

J_Shelby_J 2026-02-18 03:20 UTC link
It’s simple calculus for business leaders: admit they’re laying off workers because the fundamentals are bad and spook investors, admit they’re laying off workers because the economy is bad and anger the administration, or just say it’s AI making roles unnecessary and hope for the best.
chrismarlow9 2026-02-18 03:55 UTC link
The slow part as a senior engineer has never been actually writing the code. It has been:

- reviews for code

- asking stakeholders opinions

- SDLC latency (things taking forever to test)

- tickets

- documentation/diagrams

- presentations

Many of these require review. The review hell doesn't magically stop at open-source projects. These things happen internally too.

jurschreuder 2026-02-18 04:15 UTC link
Workers may see the LLM as a productivity boost because they can basically cheat at their homework.

As a CEO I see it as a massive clog up of vast amounts of content that somebody will need to check. A DDoS of any text-based system.

The other day I got a 155-page document on WhatsApp. Thanks. Same with pull requests. Who will check all this?

lukaslalinsky 2026-02-18 06:19 UTC link
There was a recent post where someone said AI allows them to start and finish projects, and I find that exactly true. AI agents are helpful for starting proofs of concept, and for doing finishing fixes to an established codebase. For a lot of the work in the middle they can still be useful, but the developer is more important there.
tabs_or_spaces 2026-02-18 06:21 UTC link
My experience has been

* If I don't know how to do something, llms can get me started really fast. Basically it distills the time taken to research something to a small amount.

* if I know something well, I find myself trying to guide the llm to make the best decisions. I haven't reached the state of completely letting go and trusting the llm yet, because the llm doesn't make good long term decisions

* when working alone, I see the biggest productivity boost from AI; that's where I can really get things done.

* when working in a team, llms are not useful at all and can sometimes be a bottleneck. Not everyone uses llms the same, sharing context as a team is way harder than it should be. People don't want to collaborate. People can't communicate properly.

* so for me, solo engineers or really small teams benefit the most from LLMs. Larger teams and organizations will struggle because there's simply too much human overhead to overcome. This matches what I'm seeing in posts these days.

vagrantstreet 2026-02-18 09:07 UTC link
Perhaps something went wrong along the career path of the developer? Personally, during my education there was a severe lack of actual coding done mid-lecture, especially any sort of showcase of the tools that are available. We weren't even taught how to use debuggers, and I see late-year students still struggling with basic navigation in a terminal.

And the biggest irony is that the "scariest" projects we had at our university ended up being maybe 500-1000 lines of code. Things really must go back to hands-on programming with real-time feedback from a teacher. LLMs only output what you ask for and won't really suggest concepts used by professionals unless you go out of your way to ask, so it all seems like a vicious cycle, even though meaningful code blocks can range from 5 to 100 lines. When I use LLMs I just get information burnout trying to dig through all that info or code.

cadamsdotcom 2026-02-18 09:49 UTC link
Many people are using AI as a slot machine, rerolling repeatedly until they get the result they want.

Once the tools help the AI to get feedback on what its first attempt got right and wrong, then we will see the benefits.

And the models people use en masse - eg. free tier ChatGPT - need to get to some threshold of capability where they’re able to do really well on the tasks they don’t do well enough on today.

There’s a tipping point there where models don’t create more work after they’re used for a task, but we aren’t there yet.

crispyambulance 2026-02-18 11:22 UTC link
I accept that AI-mediated productivity might not be what we expect it to be.

But really, are CEOs the best people to assess productivity? What do they _actually_ use to measure it? Annual reviews? GTFO. Perhaps more importantly, nothing a C-level says can ever be taken at face value when it involves their own business.

steveBK123 2026-02-18 12:06 UTC link
I think we are entering the phase where corporate is expecting more ROI than they are getting, but want to remain in the arms race.

The firmwide AI guru at my shop who sends out weekly usage metrics and release notes started mentioning cost only in the last few weeks. At first it was just about engaging with individual business heads on setting budgets / rules and slowing the cost growth rate.

A few weeks later and he is mentioning automated cost reporting, model downgrading and circuit breaking at a per-user level. The daily spend threshold at which you immediately get locked out for 24 hours is pretty low.

saezbaldo 2026-02-18 14:00 UTC link
One underexplored reason: companies can't give AI agents real authority. The moment an agent needs to do anything beyond summarizing text — update a CRM, transfer funds, modify infrastructure — the security question kills it. No one wants an agent that can take irreversible actions with no approval chain. Until the trust architecture problem is solved, AI stays in read-only mode for most enterprises.
K0balt 2026-02-18 14:04 UTC link
This is because the vast majority of white collar activity in a large corporation produces no direct economic value.

Making it easier/better just means more/higher-quality “worthless” work is performed. The incentives in the not-directly-productive parts of organizations are to keep busy and maintain a stream of signals of productivity. For this, AI just raises the bar. The 25% of the work that -is- important to producing economic value just gets reduced to 15%.

The workforce in large orgs that is most AI adjacent is already idling along in terms of production of direct economic value. Making them 10x more productive in nonproductive work will not impact critical metrics in a short timeframe.

It’s worth noting that these “not directly productive” activities actually can (and often do) produce value, eventually. Things like brand identity, culture, and meta-innovation, vision (search-space) are intangibles that present as cost centers but can prove invaluable in longer timescales if done right.

Ithildin 2026-02-18 15:04 UTC link
If I do something faster by pairing with AI, why should my employer reap the benefit? Why would I pass the savings on to my employer?

Could it be that employers are not seeing the difference because most employees are doing something else with the time they've saved by using AI?

There's been massive wage stagnation, benefits are crap, they play games with PTO. Most people I talk to who use AI as a part of their workflow are taking advantage of something nice that has come their way for a change.

yubblegum 2026-02-18 15:59 UTC link
I've sat in a room with a too-big-to-fail banker's VP happily telling me and my boss that "we're getting rid of this whole floor".

Dateline: ~2010. Location: NYC. Why: Indian outsourcing shops.

Now the zinger, dear hn, is this: He actually said to us (we ran a more boutique consulting firm) that "everything has to be done 3 times" and "their work is crap". But "we're getting rid of this floor".

That, imho, was due to geopolitical machinations to induce India to become part of the West. The immediate equation of "money for quality work" wasn't working, but our higher-ups had grander plans, and sacrificing and gutting the US IT industry was not a problem.

So, given the incentives these days, do not remotely pin your hopes on what these CEOs are saying. It means nothing whatsoever.

jeron 2026-02-18 02:11 UTC link
it turns out it's really hard to get a man to fish with a pole when you don't teach them how to use the reel
chaos_emergent 2026-02-18 02:12 UTC link
100%. All of the people who are floored by AI capabilities right now are software engineers, and everyone who's extremely skeptical basically has any other office job. On investigating, their primary AI interaction surface is Microsoft Copilot, which has to be the absolute shittiest implementation of any AI system so far. As a progress-driven person, it's just super disappointing to see how few people are benefiting from the productivity gains of these systems.
lich_king 2026-02-18 02:26 UTC link
> making slides/decks to communicate those thoughts,

That use case is definitely delegated to LLMs by many people. That said, I don't think it translates into linear productivity gains. Most white collar work isn't so fast-paced that if you save an hour making slides, you're going to reap some big productivity benefit. What are you going to do, make five more decks about the same thing? Respond to every email twice? Or just pat yourself on the back and browse Reddit for a while?

It doesn't help that these LLM-generated slides probably contain inaccuracies or other weirdness that someone else will need to fix down the line, so your gains are another person's loss.

amrocha 2026-02-18 02:27 UTC link
Then where are all the amazing open source programs written by individuals by themselves? Where are all the small businesses supposedly assisted by AI?
LPisGood 2026-02-18 02:29 UTC link
I’m confused about what kinds of software engineering jobs don't involve meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate that, thinking about market positioning, etc.

If you weren’t doing much of that before, I struggle to think of how you were doing much engineering at all, save for some niche, extremely technical roles where many of those questions were already answered; but even then, I'd expect you're having those kinds of discussions, just more efficiently and with other engineers.

kjellsbells 2026-02-18 02:31 UTC link
Maybe experienced people are the L2 cache? And the challenge is to keep the cache fresh and not too deep. You want institutional memory available quickly (a cache hit) to help with whatever your CPU people need at that instant. If you don't have a cache, you can still solve the problem, but oof, is it gonna take you a long time. OTOH, if you get bad data in the cache, that's not good, as everyone is going to be picking that out of the cache instead of really figuring out what to do.
mr_toad 2026-02-18 02:37 UTC link
So management basically have no clue and want you to figure out how to use AI?

Do they also make you write your own performance review and set your own objectives?

kamaal 2026-02-18 02:38 UTC link
One part of the system moving fast doesn't change the speed of the system all that much.

The thing to note is, verifying if something got done is harder and takes time in the same ballpark as doing the work.

If people are serious about AI productivity, let's start by addressing how we can verify program correctness quickly. Everything else is just a Ferrari between two red lights.
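One common concrete form of "verify quickly" is randomized property checking: assert invariants over many generated inputs instead of re-reading the code. A minimal self-contained sketch (the function and its properties are hypothetical examples; a real project would more likely reach for a library like Hypothesis):

```python
import random

def dedupe_preserve_order(items):
    """Function under test: remove duplicates, keeping first occurrences."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Property check: cheap randomized verification of invariants.
for _ in range(1000):
    xs = [random.randint(0, 9) for _ in range(random.randint(0, 20))]
    ys = dedupe_preserve_order(xs)
    assert len(ys) == len(set(xs))   # no duplicates survive
    assert set(ys) == set(xs)        # nothing is lost
    assert all(x in xs for x in ys)  # every output came from the input
print("1000 random cases passed")
```

Checks like these run in milliseconds, which is the point: verification stops taking "time in the same ballpark as doing the work".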

kace91 2026-02-18 02:43 UTC link
The comparison seems flawed in terms of cost.

A Claude subscription is 20 bucks per worker if using personal accounts billed to the company, which is not very far from common office tools like Slack. Onboarding a worker to Claude or ChatGPT is ridiculously easy compared to teaching a 1970s manual office worker to use an early computer.

Larger implementations like automating customer service might be more costly, but I think there are enough short term supposed benefits that something should be showing there.

8note 2026-02-18 02:59 UTC link
> unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time consumed from thought to completion.

huh? Maybe I'm in the minority, but thinking:coding has always been 80:20 for me: spend a ton of time thinking and drawing, then write once, debug a bit, and it works.

This hasn't really changed with LLM coding either, except that for the same amount of thinking, you get more code output.

_aavaa_ 2026-02-18 03:21 UTC link
For more on this exact topic, and an answer to Solow's Paradox, see the excellent The Dynamo and the Computer by Paul David [0].

[0]: https://www.almendron.com/tribuna/wp-content/uploads/2018/03...

ozgrakkurt 2026-02-18 03:37 UTC link
Thinking is always the hardest part and the bottleneck for me.

It doesn’t capture everyone’s experience when you say thinking is the smaller part of programming.

I don’t even believe a regular person is capable of producing good quality code without thinking 2x the amount they are coding

bccdee 2026-02-18 04:19 UTC link
There's a lot of rote work in software development that's well-suited to LLM automation, but I think a lot of us overestimate the actual usefulness of a chatbot to the average white-collar worker. What's the point of making Copilot compose an email when your prompt would be longer than the email itself? You can tell ChatGPT to make you a slide deck, but slide decks are already super simple to make. You can use an LLM as a search engine, but we already have search engines. People sometimes talk about using a chatbot to brainstorm, but that seems redundant when you could simply think, free from the burden of explaining yourself to a chatbot.

LLMs are impressive and flexible tools, but people expect them to be transformative, and they're only transformative in narrow ways. The places they shine are quite low-level: transcription, translation, image recognition, search, solving clearly specified problems using well-known APIs, etc. There's value in these, but I'm not seeing the sort of universal accelerant that some people are anticipating.

al_borland 2026-02-18 04:34 UTC link
When my company first started pushing for devs to use AI, the most senior guy on my team was pretty vocal about coding not being the bottleneck that slowed down work. It was an I/O issue, and maybe a caching issue as well from too many projects going at the same time with no focus… which also makes the I/O issues worse.
whynotminot 2026-02-18 04:45 UTC link
It’s also pretty wild to me how people still don’t really even know how to use it.

On hacker news, a very tech literate place, I see people thinking modern AI models can’t generate working code.

The other day in real life I was talking to a friend of mine about ChatGPT. They didn’t know you needed to turn on “thinking” to get higher quality results. This is a technical person who has worked at Amazon.

You can’t expect revolutionary impact while people are still learning how to even use the thing. We’re so early.

beart 2026-02-18 05:01 UTC link
> Who will check all this?

The answer to that, for some, is more AI.

I had a peer explain that the PRs created by AI are now too large and difficult to understand. They were concerned that bugs would crop up after merging the code. Their solution was to use another AI to review the code... However, this did not solve the problem of not knowing what the code does. They had a solution for that as well: ask AI to prepare a quiz and deliver it to the engineer to check their understanding of the code.

The question was asked - does using AI mean best-practices should no longer be followed? There were some in the conversation who answered, "probably yes".

> Who will check all this?

So yeah, I think the real answer to that is... no one.

overgard 2026-02-18 05:45 UTC link
> The thing with a lot of white collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed.

WHOAH WHOAH WHOAH WHOAH STOP. No coder I've ever met has thought that thinking was anything other than the BIGGEST allocation of time when coding. Nobody is putting their typing words-per-minute on their resume because typing has never been the problem.

I'm absolutely baffled that you think the job that requires some of the most thinking, by far, is somehow less cognitively intense than sending emails and making slide decks.

I honestly think a project manager's job is actually a lot easier to automate, if you're going to go there (not that I'm hoping for anyone's job to be automated away). It's a lot easier for an engineer to learn the industry and the business than it is for a project manager to learn how to keep their vibe code from spilling private keys all over the internet.

thisoneisreal 2026-02-18 06:02 UTC link
Just yesterday one of my junior devs got an 800-line code review from an AI agent. It wasn't all bad, but is this kid literally going to have to read an essay every time he submits code?
aurareturn 2026-02-18 06:32 UTC link
The future of work is fewer human team members and way more AI assistants.

I think companies will need fewer engineers but there will be more companies.

Now: 100 companies who employ 1,000 engineers each

What we are transitioning to: 10,000 companies who employ 10 engineers each

What will happen in the future: 100,000 companies who employ 1 engineer each

Same number of engineers.

We are about to enter an era of explosive software production, not from big tech but from small companies. I don't think this will only apply to the software industry. I expect this to apply to every industry.

bazmattaz 2026-02-18 06:45 UTC link
My manager mentioned that his manager (an executive) is not happy because the org we're in isn't using as many tokens as other orgs in the company. Pretty wild.
__jf__ 2026-02-18 07:03 UTC link
Paul Strassmann wrote a book in 1990 called "Business Value of Computers" that showed that it matters where money on computers is spent. Only firms that spent it on their core business processes showed increased revenues whereas the ones that spent it on peripheral business processes didn't.
notepad0x90 2026-02-18 07:12 UTC link
In my opinion, you're very wrong. There is typically lots of good communication -- one way. The stuff that doesn't get communicated down to worker bees is withheld intentionally. "CPUs" aren't all that fast either, unless you make them fast by providing incentives. If you're a well-paid worker who likes their job, I can see why you would think that, but most people aren't that.

Meetings are work, as much as IPC and network calls are work. Just because they're not fun, or not what you like to do, doesn't mean they're any less work.

I think you're analyzing things from a tactical perspective, without considering strategic considerations. For example, have you considered that it might not be desirable for CPUs to be just fast, or fast at all? is CISC faster than RISC? different architectural considerations based on different strategic goals right?

If you're an order picker at an Amazon warehouse, raw speed is important: being able to execute a simpler and more fixed set of instructions (RISC), and at greater speed, is more desirable. If you're an IT worker, less so. IT is generally a cost center, except for companies that sell IT services or software. If you're in a cost center, then you exist for non-profit-related strategic reasons, such as helping the rest of the company work efficiently, be resilient, compete, and be secure. Some people exist in case they're needed some day, others are needed critically but not frequently, yet others are needed frequently but not critically. Being able to execute complex and critical tasks reliably and in short order is more desirable for some workers. Being fast in a human context also means being easily bored, or it could mean lots of bullshit work needs to be invented to keep the person busy and happy.

I'd suggest taking that compsci approach but considering not just the varying tasks and workloads, but also the diversity of goals and use cases of users (decision makers/managers in companies). There are deeper topics with regard to strategy and decision making surrounding the state machines of incentives and punishments, and decision-maker organization (hierarchical, flat, hub-and-spoke, full-mesh, etc.).

bandrami 2026-02-18 07:19 UTC link
That's probably true for some, but I think a lot of big orgs are simply risk-averse and see AI in general as a giant risk that isn't even fully baked enough to quantify yet. The security and confidentiality issues alone will make Operations hesitant, and Legal probably has some questions about IP (both the risk of a model outputting patented or otherwise protected code, and the huge legal gray area that is the copyrightability of the output of an LLM).

Give it a year or two and let things settle down and (assuming the music is still playing at that time) you might see more dinosaurs start to wander this way.

TimByte 2026-02-18 10:08 UTC link
I suspect the real breakthrough for teams won't be better raw models, but better ways to make the "AI-assisted thinking" legible and shareable across the group, instead of trapped in personal prompt histories
TimByte 2026-02-18 10:11 UTC link
In some cases it might even make the mismatch worse. If one person can produce drafts, specs, or code much faster, you just create more work for reviewers, approvers, and downstream dependencies, which increases queueing
stephenr 2026-02-18 11:18 UTC link
> llms can get me started really fast. Basically it distills the time taken to research something

> the llm doesn't make good long term decisions

What could possibly go wrong, using something you know makes bad decisions, as the basis of your learning something new.

It's like if a dietician instructed a client to go watch McDonald's staff, when they ask how to cook the type of meals that have been recommended.

CSSer 2026-02-18 11:37 UTC link
The figure right after A6 is pretty striking. Ask people if they expect to use AI and a vast majority say yes. Ask if they expect to use AI for specific applications and no more than a third say yes in any industry. That should be telling, imo. What we have is a tool that looks impressive to any non-SME for a lot of applications. I would caution against the idea that the benefits are obvious.
NoLinkToMe 2026-02-18 11:56 UTC link
The latest company I worked in had your typical fee-earners and fee-burners categories of employees.

The fee-earners had KPIs tied to the sales pipeline, from leads to contracts to work completed on fixed contracts or hours billed on variable-rate contracts. It's relatively easy to measure improvements here. Though it's harder to distill the causes of that and tie it to LLMs.

The fee-burners like in IT, legal, compliance, marketing, finance, typically had KPIs tied to the department objectives. This stuff is a LOT more subjective and a lot more prone to manipulation (Goodhart's law). But if you spend 60 hours a week on work in such a department, you tend to have a pretty good idea if things are speeding up or not at all. In a department I was involved in there was a lot of KYC that involved reviewing 300+ pages per case; we tracked case workload per person per day, as well as success rates (percentage of case reviews completed correctly), and could see meaningful changes one could attribute to LLM use.

Agreed though that I'm more interested in a few case studies in detail to understand how they actually measured productivity.

dranudin 2026-02-18 12:41 UTC link
I noticed something similar at my work. The CEO is hyping AI, but at the same time free access to the big models was taken away and rate limits seem to be much tighter.
nutjob2 2026-02-18 13:25 UTC link
To me the biggest benefit of LLMs has always been as a learning tool, be it for general queries or "build this so I can get an idea of how it works and get started quickly". There are so many little things that you need to know when trying anything new.
Editorial Channel
What the content says
+0.20
Article 19 Freedom of Expression
Medium Coverage
Editorial
+0.20
SETL
+0.20

The article reports on economic research from identifiable sources (Financial Times analysis, National Bureau of Economic Research, MIT, Federal Reserve Bank of St. Louis, ManpowerGroup) and presents multiple perspectives from economists, corporate executives, and researchers. The reporting supports public discourse on economic impacts of technology adoption. Attribution and source transparency are evident.

+0.10
Article 3 Life, Liberty, Security
Medium Coverage Framing
Editorial
+0.10
SETL
ND

The article reports that 90% of firms surveyed experienced no impact from AI on employment over three years, a finding that contradicts earlier fears of workforce disruption. This data point suggests some protection of employment-based security. However, executive expectations of future employment cuts (0.7%) introduce uncertainty.

+0.10
Article 22 Social Security
Medium Coverage Framing
Editorial
+0.10
SETL
ND

The article reports empirical findings on AI's limited impact on employment: 90% of firms report no employment impact, though 25% of respondents don't use AI at all. About two-thirds of executives use AI but only ~1.5 hours per week. These data points suggest that anticipated workforce disruption has not materialized, providing some protection of employment-based social security. The article also discusses IBM's decision to hire more junior workers to maintain leadership pipeline despite AI capabilities, indicating workforce continuity concerns.

+0.10
Article 23 Work & Equal Pay
Medium Coverage
Editorial
+0.10
SETL
ND

The article discusses work technology adoption, worker perspectives, and labor force impacts. It reports that workers use AI ~1.5 hours per week on average and that worker confidence in AI declined 18% even as usage increased 13%. The article includes worker voice through survey data and references workforce planning decisions (IBM hiring strategy). These elements address work conditions, worker perspectives, and employment practices.

0.00
Preamble Preamble
Medium Framing
Editorial
0.00
SETL
+0.05

The article discusses economic systems and technology adoption indirectly related to human welfare, but does not explicitly engage with human dignity, freedom, or justice principles underlying the UDHR.

0.00
Article 25 Standard of Living
Medium Framing
Editorial
0.00
SETL
ND

The article discusses productivity as a driver of economic growth and, by extension, standard of living. However, the focus is on corporate productivity measures and macroeconomic expectations rather than on actual living standards, poverty reduction, basic needs provision, or distributional equity. The framing assumes productivity gains will translate to improved living standards without addressing how gains are distributed or whether basic needs are met.

-0.05
Article 24 Rest & Leisure
Low Framing
Editorial
-0.05
SETL
ND

The article's primary framing emphasizes productivity as a desirable outcome that AI should deliver. The productivity-focused narrative implicitly values output and efficiency over leisure or work-life balance. However, the reported finding that workers use AI only ~1.5 hours per week suggests minimal work intensification, which moderately mitigates concerns about eroded rest and leisure time.

ND
Article 1 Freedom, Equality, Brotherhood

ND
Article 2 Non-Discrimination

ND
Article 4 No Slavery

ND
Article 5 No Torture

ND
Article 6 Legal Personhood

ND
Article 7 Equality Before Law

ND
Article 8 Right to Remedy

ND
Article 9 No Arbitrary Detention

ND
Article 10 Fair Hearing

ND
Article 11 Presumption of Innocence

ND
Article 12 Privacy

ND
Article 13 Freedom of Movement

ND
Article 14 Asylum

ND
Article 15 Nationality

ND
Article 16 Marriage & Family

ND
Article 17 Property

ND
Article 18 Freedom of Thought

ND
Article 20 Assembly & Association

ND
Article 21 Political Participation

ND
Article 26 Education

ND
Article 27 Cultural Participation

ND
Article 28 Social & International Order

ND
Article 29 Duties to Community

ND
Article 30 No Destruction of Rights

Structural Channel
What the site does
0.00
Article 19 Freedom of Expression
Medium Coverage
Structural
0.00
Context Modifier
+0.02
SETL
+0.20

Fortune.com's business journalism model provides access to information (though behind a paywall), supporting informational accessibility for those with access. The publicly indexed article supports discoverability.

-0.05
Preamble Preamble
Medium Framing
Structural
-0.05
Context Modifier
0.00
SETL
+0.05

Fortune.com's paywall and restricted access partially limit the visibility of this information about economic systems affecting human welfare.

-0.35
Article 12 Privacy
High Practice
Structural
-0.35
Context Modifier
-0.05
SETL
ND

The page contains Mixpanel tracking initialized with 'autocapture: true' and 'record_sessions_percent: 100', indicating systematic real-time recording of all user session data and behavioral interactions without explicit granular consent mechanism visible in the content provided.
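The tracking setup described above can be sketched as a Mixpanel initialization call. This is a hypothetical reconstruction, not the page's actual code; the project token is a placeholder, while `autocapture` and `record_sessions_percent` are the real Mixpanel options named in the finding:

```javascript
// Hypothetical reconstruction of the tracking described above.
// "PLACEHOLDER_TOKEN" stands in for the site's real project token.
mixpanel.init("PLACEHOLDER_TOKEN", {
  autocapture: true,            // automatically capture clicks, form submits, pageviews
  record_sessions_percent: 100, // record session replays for 100% of visitors
});
```

With `record_sessions_percent: 100`, every visitor's session is recorded rather than a sample, which is the basis for the "systematic real-time recording" characterization.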

ND
Article 1 Freedom, Equality, Brotherhood

ND
Article 2 Non-Discrimination

ND
Article 3 Life, Liberty, Security

ND
Article 4 No Slavery

ND
Article 5 No Torture

ND
Article 6 Legal Personhood

ND
Article 7 Equality Before Law

ND
Article 8 Right to Remedy

ND
Article 9 No Arbitrary Detention

ND
Article 10 Fair Hearing

ND
Article 11 Presumption of Innocence

ND
Article 13 Freedom of Movement

ND
Article 14 Asylum

ND
Article 15 Nationality

ND
Article 16 Marriage & Family

ND
Article 17 Property

ND
Article 18 Freedom of Thought

ND
Article 20 Assembly & Association

ND
Article 21 Political Participation

ND
Article 22 Social Security

ND
Article 23 Work & Equal Pay

ND
Article 24 Rest & Leisure

ND
Article 25 Standard of Living

ND
Article 26 Education

ND
Article 27 Cultural Participation

ND
Article 28 Social & International Order

ND
Article 29 Duties to Community

ND
Article 30 No Destruction of Rights

Supplementary Signals
Epistemic Quality
0.69
Propaganda Flags
2 techniques detected
repetition
Solow's 1987 remark, 'You can see the computer age everywhere but in the productivity statistics', appears twice, once in the deck and again in the body, framing the AI paradox narrative.
causal oversimplification
The article presents the IT productivity paradox resolution (eventual gains in 1990s-2000s) as a historical lesson without fully exploring the conditions, policy decisions, or structural factors that enabled those gains, implying AI may follow the same trajectory.
Solution Orientation
No data
Emotional Tone
No data
Stakeholder Voice
No data
Temporal Framing
No data
Geographic Scope
No data
Complexity
No data
Transparency
No data
Event Timeline 20 events
2026-02-26 12:20 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 12:18 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 12:17 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 12:16 rate_limit OpenRouter rate limited (429) model=llama-3.3-70b - -
2026-02-26 10:05 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 10:04 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 10:02 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 10:01 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 10:01 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 09:59 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 09:59 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 09:56 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 09:55 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 09:49 credit_exhausted Credit balance too low, retrying in 248s - -
2026-02-26 09:43 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 09:38 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 09:29 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 09:29 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 09:28 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
2026-02-26 09:26 dlq Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox - -
About HRCB | By Right | HN Guidelines | HN FAQ | Source | UDHR | RSS
build 1686d6e+53hr · deployed 2026-02-26 10:15 UTC · evaluated 2026-02-26 12:13:57 UTC