This Fortune article reports research findings on AI adoption among executives, documenting that most firms (90%) perceive no impact on employment or productivity despite significant investment, mirroring economist Robert Solow's famous observation about the IT productivity paradox of the 1970s and 1980s. The piece engages with employment security and work technology adoption, presenting data from 6,000 executives across multiple countries and discussing divergent expectations about future economic impacts, while simultaneously implementing comprehensive user surveillance through tracking mechanisms.
If you include Microsoft Copilot trials in Fortune 500s, absolutely. A lot of major listed companies are still oblivious to the functionality of AI; their senior management doesn't even use it, out of laziness.
My company’s behind the curve, just got nudged today that I should make sure my AI use numbers aren’t low enough to stand out or I may have a bad time. Reckon we’re minimum six months from “oh whoops that was a waste of money”, maybe even a year. (Unless the AI market very publicly crashes first)
My compsci brain suggests large orgs are a distributed system running on faulty hardware (humans) with high network latency (communication). The individual people (CPUs) are plenty fast; we just waste time in meetings, or waiting for approval, or a lot of tasks can't be parallelized, etc. Before upgrading, you need to know whether you're I/O-bound or CPU-bound.
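The I/O-bound vs. CPU-bound distinction can be made concrete with a quick profiling trick; a toy sketch (the function name and the 0.5 cutoff are illustrative assumptions, not a standard diagnostic):

```python
import time

def classify_workload(task) -> str:
    """Rough classification: compare CPU time to wall-clock time.

    A task that keeps the CPU busy the whole time is CPU-bound; one that
    mostly waits (sleeps, network calls, approvals...) is I/O-bound.
    The 0.5 cutoff is an arbitrary illustrative threshold.
    """
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    task()
    wall = time.perf_counter() - wall_start
    cpu = time.process_time() - cpu_start
    return "cpu-bound" if cpu / wall > 0.5 else "io-bound"

# A "meeting": the CPU sits idle, just waiting.
waiting = classify_workload(lambda: time.sleep(0.2))
# Actual computation: the CPU is busy throughout.
crunching = classify_workload(lambda: sum(i * i for i in range(2_000_000)))
```

The analogy's point survives the translation: speeding up the CPUs (the workers) does nothing for the first kind of task.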
The thing with a lot of white collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time consumed from thought to completion.
Other white collar business/bullshit job (à la Graeber) work is meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate those thoughts, thinking about market positioning, etc.
Maybe tools like Cowork can help to find files, identify tickets, pull in information, write Excel formulas, etc.
What’s different about coding is no one actually cares about code as output from a business standpoint. The code is the end destination for decided business processes. I think, for that reason, that code is uniquely well adapted to LLM takeover.
But I’m not so sure about other white-collar jobs. If anything, AI tooling just makes everyone move faster. But an LLM automating a new feature release and drafting a press release and hopping on a sales call to sell the product is (IMO) further off than turning a detailed prompt into a fully functional codebase autonomously.
Just to be clear, the article is NOT criticizing this. To the contrary, it's presenting it as expected, thanks to Solow's productivity paradox [1].
Which is that information technology similarly (and seemingly shockingly) didn't produce any net economic gains in the 1970s or 1980s despite all the computerization. It wasn't until the mid-to-late 1990s that information technology finally started to show clear benefit to the economy overall.
The reason is that investing in IT was very expensive, there were lots of wasted efforts, and it took a long time for the benefits to outweigh the costs across the entire economy.
And so we should expect AI to look the same -- it's helping lots of people, but it's also costing an extraordinary amount of money, and the gains for the people it's helping are, for now, at least outweighed by the people wasting time with it and by its expense. But we should recognize that it's very early days, and that productivity will rise and costs will come down with time, as we learn to integrate it with best practices.
Figure A6 on page 45: Current and expected AI adoption by industry
Figure A11 on page 51: Realised and expected impacts of AI on employment by industry
Figure A12 on page 52: Realised and expected impacts of AI on productivity by industry
These seem to roughly line up with my expectations: the more customer-facing your industry is, or the more it deals in physical products, the lower the usage and impact of AI (construction, retail).
A little bit surprising is "Accom & Food" being 4th highest for productivity impact in A12. I wonder how they are using it.
It’s simple calculus for business leaders: admit they’re laying off workers because the fundamentals are bad and spook investors, admit they’re laying off workers because the economy is bad and anger the administration, or just say it’s AI making roles unnecessary and hope for the best.
There was a recent post where someone said AI allows them to start and finish projects, and I find that exactly true. AI agents are helpful for starting proofs of concept, and for doing finishing fixes to an established codebase. For a lot of the work in the middle, AI can still be useful, but the developer is more important there.
* If I don't know how to do something, LLMs can get me started really fast. Basically, they compress the time it takes to research something down to a small amount.
* If I know something well, I find myself trying to guide the LLM to make the best decisions. I haven't reached the state of completely letting go and trusting the LLM yet, because the LLM doesn't make good long-term decisions.
* When working alone, I see the biggest productivity boost from AI; that's where I can get things done.
* When working in a team, LLMs are not useful at all and can sometimes be a bottleneck. Not everyone uses LLMs the same way, sharing context as a team is way harder than it should be, people don't want to collaborate, and people can't communicate properly.
* So for me, solo engineers or really small teams benefit the most from LLMs. Larger teams and organizations will struggle because there's simply too much human overhead to overcome. This matches what I'm seeing in posts these days.
Perhaps something went wrong along the career path of a developer? Personally, during my education there was a severe lack of actual coding done mid-lecture, especially any sort of showcase of the tools that are available. We weren't even taught how to use debuggers, and I see late-year students still struggling with basic navigation in a terminal.
And the biggest irony is that the "scariest" projects we had at our university ended up being maybe 500-1,000 lines of code. Things really must go back to hands-on programming with real-time feedback from a teacher. LLMs only output what you ask for and won't really suggest concepts used by professionals unless you go out of your way to ask, so it all seems like a vicious cycle, even though meaningful code blocks usually range from 5 to 100 lines. When I use LLMs I just get information burnout trying to dig through all that info or code.
Many people are using AI as a slot machine, rerolling repeatedly until they get the result they want.
Once the tools help the AI to get feedback on what its first attempt got right and wrong, then we will see the benefits.
And the models people use en masse - eg. free tier ChatGPT - need to get to some threshold of capability where they’re able to do really well on the tasks they don’t do well enough on today.
There’s a tipping point there where models don’t create more work after they’re used for a task, but we aren’t there yet.
I accept that AI-mediated productivity might not be what we expect it to be.
But really, are CEOs the best people to assess productivity? What do they _actually_ use to measure it? Annual reviews? GTFO. Perhaps more importantly, it's not like anything a C-level says can ever be taken at face value when it involves their own business.
I think we are entering the phase where corporate is expecting more ROI than they are getting, but want to remain in the arms race.
The firmwide AI guru at my shop who sends out weekly usage metrics and release notes started mentioning cost only in the last few weeks. At first it was just about engaging with individual business heads on setting budgets / rules and slowing the cost growth rate.
A few weeks later and he is mentioning automated cost reporting, model downgrading, and circuit breaking at a per-user level. The daily spend threshold that immediately locks you out for 24 hours is pretty low.
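A per-user spend circuit breaker of the kind described can be sketched in a few lines (class name, limit, and reset mechanism are all hypothetical, not the actual system):

```python
class SpendCircuitBreaker:
    """Toy per-user daily-spend circuit breaker.

    Once a user's spend for the day reaches the limit, further requests
    are refused until the counters are reset.
    """

    def __init__(self, daily_limit_usd: float):
        self.daily_limit = daily_limit_usd
        self._spend: dict = {}  # user -> spend so far today

    def allow(self, user: str, request_cost_usd: float) -> bool:
        if self._spend.get(user, 0.0) >= self.daily_limit:
            return False  # breaker tripped: user is locked out
        self._spend[user] = self._spend.get(user, 0.0) + request_cost_usd
        return True

    def reset_day(self) -> None:
        self._spend.clear()  # e.g. run from a daily scheduled job
```

Model downgrading would slot in the same place: instead of returning False, route the request to a cheaper model once a softer threshold is crossed.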
One underexplored reason: companies can't give AI agents real authority. The moment an agent needs to do anything beyond summarizing text — update a CRM, transfer funds, modify infrastructure — the security question kills it. No one wants an agent that can take irreversible actions with no approval chain. Until the trust architecture problem is solved, AI stays in read-only mode for most enterprises.
This is because the vast majority of white collar activity in a large corporation produces no direct economic value.
Making it easier/better just means more/higher-quality “worthless” work is performed. The incentives in the not-directly-productive parts of organizations are to keep busy and maintain a stream of signals of productivity. For this, AI just raises the bar. The 25% of the work that _is_ important to producing economic value just gets reduced to 15%.
The workforce in large orgs that is most AI adjacent is already idling along in terms of production of direct economic value. Making them 10x more productive in nonproductive work will not impact critical metrics in a short timeframe.
It’s worth noting that these “not directly productive” activities actually can (and often do) produce value, eventually. Things like brand identity, culture, and meta-innovation, vision (search-space) are intangibles that present as cost centers but can prove invaluable in longer timescales if done right.
If I do something faster by pairing with AI, why should my employer reap the benefit? Why would I pass the savings on to my employer?
Could it be that employers are not seeing the difference because most employees are doing something else with the time they've saved by using AI?
There's been massive wage stagnation, benefits are crap, they play games with PTO. Most people I talk to who use AI as a part of their workflow are taking advantage of something nice that has come their way for a change.
Now the zinger, dear hn, is this: He actually said to us (we ran a more boutique consulting firm) that "everything has to be done 3 times" and "their work is crap". But "we're getting rid of this floor".
That, imho, was due to geopolitical machinations aimed at inducing India to become part of the West. The immediate equation of "money for quality work" wasn't working, but our higher-ups had grander plans, and sacrificing and gutting the IT industry in the US was not a problem.
So, given the incentives these days, do not remotely pin your hopes on what these CEOs are saying. It means nothing whatsoever.
100%. All of the people who are floored by AI capabilities right now are software engineers, and everyone who's extremely skeptical basically has any other office job. On investigating their primary AI interaction surface, it's Microsoft Copilot, which has to be the absolute shittiest implementation of any AI system so far. As a progress-driven person, it's just super disappointing to see how few people are benefiting from the productivity gains of these systems.
> making slides/decks to communicate those thoughts,
That use case is definitely delegated to LLMs by many people. That said, I don't think it translates into linear productivity gains. Most white collar work isn't so fast-paced that if you save an hour making slides, you're going to reap some big productivity benefit. What are you going to do, make five more decks about the same thing? Respond to every email twice? Or just pat yourself on the back and browse Reddit for a while?
It doesn't help that these LLM-generated slides probably contain inaccuracies or other weirdness that someone else will need to fix down the line, so your gains are another person's loss.
Then where are all the amazing open source programs written by individuals by themselves? Where are all the small businesses supposedly assisted by AI?
I’m confused what kind of software engineer jobs there are that don’t involve meeting with people, “aligning expectations”, getting consensus, making slides/decks to communicate that, thinking about market positioning, etc?
If you weren’t doing much of that before, I’d struggle to think of how you were doing much engineering at all, save for some niche, extremely technical roles where many of those questions were already answered. But even then, I’d expect you’re having those kinds of discussions, just more efficiently and with other engineers.
Maybe experienced people are the L2 cache? And the challenge is to keep the cache fresh and not too deep. You want institutional memory available quickly (a cache hit) to help with whatever your CPU people need at that instant. If you don't have a cache, you can still solve the problem, but oof, is it gonna take you a long time. OTOH, if you get bad data in the cache, that's not good, as everyone is going to be picking that out of the cache instead of really figuring out what to do.
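The "keep the cache fresh" half of the analogy maps to a TTL cache; a toy sketch (names invented for illustration):

```python
import time

class InstitutionalMemory:
    """Toy TTL cache illustrating the analogy.

    Entries expire after `ttl` seconds, forcing a slow recomputation
    ("really figuring out what to do") instead of serving stale answers
    forever. Note that bad data still poisons the cache until it expires.
    """

    def __init__(self, ttl: float):
        self.ttl = ttl
        self._store: dict = {}  # key -> (value, insertion time)

    def get(self, key, recompute):
        entry = self._store.get(key)
        if entry is not None and time.monotonic() - entry[1] < self.ttl:
            return entry[0]  # cache hit: fast
        value = recompute()  # cache miss: slow but fresh
        self._store[key] = (value, time.monotonic())
        return value
```

The ttl parameter is the tension the comment describes: too long and stale answers circulate, too short and everything becomes a slow recomputation.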
One part of the system moving fast doesn't change the speed of the system all that much.
The thing to note is, verifying if something got done is harder and takes time in the same ballpark as doing the work.
If people are serious about AI productivity, let's start by addressing how we can verify program correctness quickly. Everything else is just a Ferrari between two red lights.
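One cheap, partial answer to fast verification is randomized differential testing: run the code under suspicion against a trusted reference on many random inputs. A minimal sketch (the sort example and all names are hypothetical):

```python
import random

def property_check(candidate, reference, gen_input, trials=1000):
    """Randomized differential test.

    Not a proof of correctness, but it turns "does this do the right
    thing?" into a seconds-long check whenever a trusted reference
    (even a slow one) exists.
    """
    for _ in range(trials):
        x = gen_input()
        got, want = candidate(x), reference(x)
        if got != want:
            raise AssertionError(f"mismatch on {x!r}: {got!r} != {want!r}")
    return True

# Hypothetical use: an AI-generated sort checked against the builtin.
def generated_sort(xs):  # stand-in for code you didn't write yourself
    return sorted(xs)

def random_list():
    return [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
```

The catch, which keeps this only a partial answer, is that for most real tasks there is no reference implementation; if there were, you wouldn't need the new code.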
A Claude subscription is 20 bucks per worker if using personal accounts billed to the company, which is not very far from common office tools like Slack. Onboarding a worker to Claude or ChatGPT is ridiculously easy compared to teaching a 1970s manual office worker to use an early computer.
Larger implementations like automating customer service might be more costly, but I think there are enough short term supposed benefits that something should be showing there.
> unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed. Writing the software, which is essentially working through how to implement the thought, used to take a much larger percentage of the overall time consumed from thought to completion.
Huh? Maybe I'm in the minority, but the thinking:coding ratio has always been 80:20. Spend a ton of time thinking and drawing, then write once, debug a bit, and it works.
This hasn't really changed with LLM coding either, except that for the same amount of thinking, you get more code output.
There's a lot of rote work in software development that's well-suited to LLM automation, but I think a lot of us overestimate the actual usefulness of a chatbot to the average white-collar worker. What's the point of making Copilot compose an email when your prompt would be longer than the email itself? You can tell ChatGPT to make you a slide deck, but slide decks are already super simple to make. You can use an LLM as a search engine, but we already have search engines. People sometimes talk about using a chatbot to brainstorm, but that seems redundant when you could simply think, free from the burden of explaining yourself to a chatbot.
LLMs are impressive and flexible tools, but people expect them to be transformative, and they're only transformative in narrow ways. The places they shine are quite low-level: transcription, translation, image recognition, search, solving clearly specified problems using well-known APIs, etc. There's value in these, but I'm not seeing the sort of universal accelerant that some people are anticipating.
When my company first started pushing for devs to use AI, the most senior guy on my team was pretty vocal about coding not being the bottleneck that slowed down work. It was an I/O issue, and maybe a caching issue as well from too many projects going at the same time with no focus… which also makes the I/O issues worse.
It’s also pretty wild to me how people still don’t really even know how to use it.
On hacker news, a very tech literate place, I see people thinking modern AI models can’t generate working code.
The other day in real life I was talking to a friend of mine about ChatGPT. They didn’t know you needed to turn on “thinking” to get higher quality results. This is a technical person who has worked at Amazon.
You can’t expect revolutionary impact while people are still learning how to even use the thing. We’re so early.
I had a peer explain that the PRs created by AI are now too large and difficult to understand. They were concerned that bugs would crop up after merging the code. Their solution was to use another AI to review the code... However, this did not solve the problem of not knowing what the code does. They had a solution for that as well: ask AI to prepare a quiz and then deliver it to the engineer to check their understanding of the code.
The question was asked - does using AI mean best-practices should no longer be followed? There were some in the conversation who answered, "probably yes".
> Who will check all this?
So yeah, I think the real answer to that is... no one.
> The thing with a lot of white collar work is that the thinking/talking is often the majority of the work… unlike coding, where thinking is (or, used to be, pre-agent) a smaller percentage of the time consumed.
WHOAH WHOAH WHOAH WHOAH STOP. No coder I've ever met has thought that thinking was anything other than the BIGGEST allocation of time when coding. Nobody is putting their typing words-per-minute on their resume because typing has never been the problem.
I'm absolutely baffled that you think the job that requires some of the most thinking, by far, is somehow less cognitively intense than sending emails and making slide decks.
I honestly think a project managers job is actually a lot easier to automate, if you're going to go there (not that I'm hoping for anyone's job to be automated away). It's a lot easier for an engineer to learn the industry and business than it is for a project manager to learn how to keep their vibe code from spilling private keys all over the internet.
Just yesterday one of my junior devs got an 800-line code review from an AI agent. It wasn't all bad, but is this kid literally going to have to read an essay every time he submits code?
The future of work is fewer human team members and way more AI assistants.
I think companies will need fewer engineers but there will be more companies.
Now: 100 companies who employ 1,000 engineers each
What we are transitioning to: 10,000 companies who employ 10 engineers each
What will happen in the future: 100,000 companies who employ 1 engineer each
Same number of engineers.
We are about to enter an era of explosive software production, not from big tech but from small companies. I don't think this will only apply to the software industry. I expect this to apply to every industry.
My manager mentioned that his manager (an executive) is not happy because the org we're in isn't using as many tokens as other orgs in the company. Pretty wild.
Paul Strassmann wrote a book in 1990 called "Business Value of Computers" that showed that it matters where money on computers is spent. Only firms that spent it on their core business processes showed increased revenues whereas the ones that spent it on peripheral business processes didn't.
In my opinion, you're very wrong. There is typically lots of good communication -- one-way. The stuff that doesn't get communicated down to worker bees is intentional. "CPUs" aren't all that fast either, unless you make them fast by providing incentives. If you're a well-paid worker who likes their job, I can see why you would think that, but most people aren't that.
Meetings are work, as much as IPC and network calls are work. Just because they're not fun, or not what you like to do, it doesn't mean they're any less work.
I think you're analyzing things from a tactical perspective, without considering strategic considerations. For example, have you considered that it might not be desirable for CPUs to be just fast, or fast at all? Is CISC faster than RISC? Different architectural choices serve different strategic goals, right?
If you're an order picker at an Amazon warehouse, raw speed is important: being able to execute a simpler, more fixed set of instructions (RISC), and at greater speed, is more desirable. If you're an IT worker, less so. IT is generally a cost center, except for companies that sell IT services or software. If you're in a cost center, then you exist for non-profit-related strategic reasons, such as to help the rest of the company work efficiently, be resilient, compete, and be secure. Some people exist in case they're needed some day, others are needed critically but not frequently, yet others are needed frequently but not critically. Being able to execute complex and critical tasks reliably and in short order is more desirable for some workers. Being fast in a human context also means being easily bored, or it could mean lots of bullshit work needs to be invented to keep the person busy and happy.
I'd suggest taking that compsci approach but considering not just the varying tasks and workloads, but also the diversity of goals and use cases of users (decision makers/managers in companies). There are deeper topics with regard to strategy and decision making surrounding the state machines of incentives and punishments, and decision-maker organization (hierarchical, flat, hub-and-spoke, full-mesh, etc.).
That's probably true for some, but I think a lot of big orgs are simply risk-averse and see AI in general as a giant risk that isn't even fully baked enough to quantify yet. The security and confidentiality issues alone will make Operations hesitant, and Legal probably has some questions about IP (both the risk of a model outputting patented or otherwise protected code, and the huge legal gray area that is the copyrightability of the output of an LLM).
Give it a year or two and let things settle down and (assuming the music is still playing at that time) you might see more dinosaurs start to wander this way.
I suspect the real breakthrough for teams won't be better raw models, but better ways to make the "AI-assisted thinking" legible and shareable across the group, instead of trapped in personal prompt histories.
In some cases it might even make the mismatch worse. If one person can produce drafts, specs, or code much faster, you just create more work for reviewers, approvers, and downstream dependencies, which increases queueing.
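That queueing effect is standard: in an M/M/1 model, average backlog grows nonlinearly as the arrival rate approaches reviewer capacity. A sketch (the PR rates are illustrative):

```python
def mm1_queue_length(arrival_rate: float, service_rate: float) -> float:
    """Average number of jobs in an M/M/1 queue: L = rho / (1 - rho).

    A toy model of a review bottleneck: authors submit work at
    arrival_rate, a single reviewer clears it at service_rate.
    """
    rho = arrival_rate / service_rate  # reviewer utilization
    if rho >= 1:
        raise ValueError("arrival rate >= service rate: backlog grows without bound")
    return rho / (1 - rho)

# A reviewer clears 10 PRs/day. Raising author output from 5 to 9 PRs/day
# (less than 2x) grows the average backlog from 1 to about 9 (roughly 9x).
light_load = mm1_queue_length(5, 10)
heavy_load = mm1_queue_length(9, 10)
```

The nonlinearity is the point: a modest speedup on the producing side can swamp a fixed-capacity reviewing side.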
Figure right after A6 is pretty striking. Ask people if they expect to use AI and a vast majority say yes. Ask if they expect to use AI for specific applications and no more than a third say yes in any industry. That should be telling imo. What we have is a tool that looks impressive to any non-SME for a lot of applications. I would caution against the idea that benefits are obvious.
The latest company I worked in had your typical fee-earners and fee-burners categories of employees.
The fee-earners had KPIs tied to the sales pipeline, from leads to contracts to work completed on fixed contracts or hours billed on variable-rate contracts. It's relatively easy to measure improvements here. Though it's harder to distill the causes of that and tie it to LLMs.
The fee-burners like in IT, legal, compliance, marketing, finance, typically had KPIs tied to the department objectives. This stuff is a LOT more subjective and a lot more prone to manipulation (Goodhart's law). But if you spend 60 hours a week on work in such a department, you tend to have a pretty good idea if things are speeding up or not at all. In a department I was involved in there was a lot of KYC that involved reviewing 300+ pages per case; we tracked case workload per person per day, as well as success rates (percentage of case reviews completed correctly), and could see meaningful changes one could attribute to LLM use.
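Those two KPIs (case workload per person per day, and success rate) are mechanical to compute; a toy sketch with an invented log schema:

```python
from statistics import mean

def team_metrics(case_log):
    """Compute the two KPIs mentioned above.

    case_log: list of (reviewer, day, correct: bool) tuples, a toy schema
    standing in for whatever the real case-tracking system exports.
    """
    per_person_day: dict = {}
    for reviewer, day, _ in case_log:
        key = (reviewer, day)
        per_person_day[key] = per_person_day.get(key, 0) + 1
    workload = mean(per_person_day.values())  # cases per person per day
    success = mean(1.0 if ok else 0.0 for _, _, ok in case_log)
    return workload, success

log = [("ana", 1, True), ("ana", 1, True), ("ana", 2, False),
       ("bo", 1, True), ("bo", 2, True), ("bo", 2, True)]
workload, success = team_metrics(log)
```

Comparing these two series before and after an LLM rollout is exactly the kind of attribution the comment describes, with the usual caveat that other changes land in the same window.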
Agreed though that I'm more interested in a few case studies in detail to understand how they actually measured productivity.
I noticed something similar at my work. The CEO is hyping AI, but at the same time free access to the big models was taken away and rate limits seem to be much tighter.
To me the biggest benefit of LLMs has always been as a learning tool, be it for general queries or "build this so I can get an idea of how it works and get started quickly". There are so many little things that you need to know when trying anything new.
Editorial Channel: What the content says

Article 19 (Freedom of Expression) | Medium Coverage | Editorial: +0.20 | SETL: +0.20
The article reports on economic research from identifiable sources (Financial Times analysis, National Bureau of Economic Research, MIT, Federal Reserve Bank of St. Louis, ManpowerGroup) and presents multiple perspectives from economists, corporate executives, and researchers. The reporting supports public discourse on economic impacts of technology adoption. Attribution and source transparency are evident.
Observable Facts
Article cites specific studies: Financial Times analysis, NBER research on 6,000 executives, MIT studies, Federal Reserve report, ManpowerGroup survey.
Multiple expert perspectives are quoted: economists Torsten Slok, Erik Brynjolfsson, Daron Acemoglu, Mohamed El-Erian, and IBM's Nickle LaMoreaux.
Author is identified as Sasha Rogelberg, Reporter at Fortune.
Publication date and time are provided: February 17, 2026, 1:32 PM ET.
Inferences
The multi-source attribution and expert diversity support informed public discourse on technology and economic policy, advancing Article 19 protections for information access and expression.
The transparent sourcing enables readers to evaluate claims independently and understand the evidence base for economic assertions about AI.
Article 3 (Life, Liberty, Security) | Medium Coverage, Framing | Editorial: +0.10 | SETL: ND
The article reports that 90% of firms surveyed experienced no impact from AI on employment over three years, a finding that contradicts earlier fears of workforce disruption. This data point suggests some protection of employment-based security. However, executive expectations of future employment cuts (0.7%) introduce uncertainty.
Observable Facts
Article reports that 'nearly 90% of firms said AI has had no impact on employment or productivity over the last three years.'
Article notes that executives surveyed forecast a 0.7% cut to employment over the next three years, while individual employees surveyed see a 0.5% increase.
Article covers a study of 6,000 executives from firms in U.S., U.K., Germany, and Australia.
Inferences
The finding that AI has not yet disrupted employment suggests that immediate threats to job security and life stability from AI remain limited, aligning with Article 3 protections of employment-based security.
The disparity between executive expectations and employee perceptions suggests uncertainty about future employment security.
Article 22 (Social Security) | Medium Coverage, Framing | Editorial: +0.10 | SETL: ND
The article reports empirical findings on AI's limited impact on employment: 90% of firms report no employment impact, though 25% of respondents don't use AI at all. About two-thirds of executives use AI but only ~1.5 hours per week. These data points suggest that anticipated workforce disruption has not materialized, providing some protection of employment-based social security. The article also discusses IBM's decision to hire more junior workers to maintain leadership pipeline despite AI capabilities, indicating workforce continuity concerns.
Observable Facts
Article states: 'nearly 90% of firms said AI has had no impact on employment or productivity over the last three years.'
Article reports: 'about two-thirds of executives reported using AI, that usage amounted to only about 1.5 hours per week, and 25% of respondents reported not using AI in the workplace at all.'
Article notes IBM's commitment: 'IBM's chief human resources officer, said last week the tech giant would triple its number of young hires, suggesting that despite AI's ability to automate some of the required tasks, displacing entry-level workers would create a dearth of middle managers down the line.'
Study covered 6,000 executives from 'firms who responded to various business outlook surveys in the U.S., U.K., Germany, and Australia.'
Inferences
The finding that AI adoption has not yet disrupted employment protects workers' right to social security through continued employment access, aligning with Article 22 protections.
The low weekly AI usage and significant non-adoption rates suggest that workforce participation and economic security through employment remain substantially intact despite AI investment.
Article 23 (Work & Equal Pay) | Medium Coverage | Editorial: +0.10 | SETL: ND
The article discusses work technology adoption, worker perspectives, and labor force impacts. It reports that workers use AI ~1.5 hours per week on average and that worker confidence in AI declined 18% even as usage increased 13%. The article includes worker voice through survey data and references workforce planning decisions (IBM hiring strategy). These elements address work conditions, worker perspectives, and employment practices.
Observable Facts
Article cites ManpowerGroup survey: 'workers' regular AI use increased 13% in 2025, but confidence in the technology's utility plummeted 18%, indicating persistent distrust.'
Article reports workers use AI 'only about 1.5 hours per week' on average.
Article discusses IBM's workforce strategy: 'Nickle LaMoreaux, IBM's chief human resources officer, said last week the tech giant would triple its number of young hires, suggesting that despite AI's ability to automate some of the required tasks, displacing entry-level workers would create a dearth of middle managers down the line.'
Data covers 'nearly 14,000 workers in 19 countries' in ManpowerGroup study.
Inferences
The inclusion of worker perspectives (trust/confidence data) and workplace usage patterns shows attention to workers' lived experience with AI technology, supporting Article 23 protections of work conditions.
The declining worker confidence despite increased adoption suggests workers retain skepticism about AI implementation, indicating their agency in evaluating workplace technology.
Preamble | Medium, Framing | Editorial: 0.00 | SETL: +0.05
The article discusses economic systems and technology adoption indirectly related to human welfare, but does not explicitly engage with human dignity, freedom, or justice principles underlying the UDHR.
Observable Facts
The article discusses economic productivity and employment as systemic concerns affecting worker welfare.
Access to the full article is restricted by a paywall on fortune.com.
Inferences
The framing of economic productivity as a measure of societal wellbeing relates tangentially to the Preamble's concern for human dignity and standards of living.
Article 25 (Standard of Living) | Medium, Framing | Editorial: 0.00 | SETL: ND
The article discusses productivity as a driver of economic growth and, by extension, standard of living. However, the focus is on corporate productivity measures and macroeconomic expectations rather than on actual living standards, poverty reduction, basic needs provision, or distributional equity. The framing assumes productivity gains will translate to improved living standards without addressing how gains are distributed or whether basic needs are met.
Observable Facts
Article discusses productivity growth rates and GDP tracking: 'fourth-quarter GDP was tracking up 3.7%' and 'U.S. productivity jump of 2.7% last year.'
Article frames productivity as central to economic wellbeing: economists discuss it as key metric for economic health.
Article notes executives forecast 'AI will increase productivity by 1.4% and increase output by 0.8% over the next three years.'
Inferences
The article engages with productivity as a proxy for living standards but does not address distributional questions or actual outcomes for workers' standards of living.
The focus on productivity metrics rather than standards of living outcomes represents an indirect approach to Article 25 concerns.
-0.05
Article 24: Rest & Leisure
Low Framing
Editorial
-0.05
SETL
ND
The article's primary framing emphasizes productivity as a desirable outcome that AI should deliver. The productivity-focused narrative implicitly values output and efficiency over leisure or work-life balance. However, the reported finding that workers use AI only ~1.5 hours per week suggests minimal work intensification, which moderately mitigates concerns about eroded rest and leisure time.
Observable Facts
Article emphasizes disappointment that 'nearly 90% of firms said AI has had no impact on employment or productivity' and frames the lack of productivity gains as a problem to be solved.
Article reports workers use AI 'only about 1.5 hours per week,' suggesting limited workplace time devoted to AI tasks.
The article's primary concern is the absence of productivity gains, not worker wellbeing or rest time.
Inferences
The emphasis on productivity as a desired metric reflects an implicit economic framing that prioritizes output over rest, potentially misaligning with Article 24 protections of leisure and reasonable working hours.
The low actual AI usage partially protects workers from work intensification, moderating the negative implications of the productivity-focused framing.
ND
Article 1: Freedom, Equality, Brotherhood
ND
Article 2: Non-Discrimination
ND
Article 4: No Slavery
ND
Article 5: No Torture
ND
Article 6: Legal Personhood
ND
Article 7: Equality Before Law
ND
Article 8: Right to Remedy
ND
Article 9: No Arbitrary Detention
ND
Article 10: Fair Hearing
ND
Article 11: Presumption of Innocence
ND
Article 12: Privacy
High Practice
The page contains Mixpanel tracking initialized with 'autocapture: true' and 'record_sessions_percent: 100', indicating systematic real-time recording of all user sessions and behavioral interactions, with no explicit, granular consent mechanism visible in the provided content.
Mixpanel configuration enables autocapture of all user interactions and records 100% of sessions.
No explicit user opt-in or detailed consent banner for session recording is visible in the provided content.
Inferences
The comprehensive session recording and autocapture configuration represents substantial surveillance of user behavior, conflicting with privacy expectations and data minimization principles underlying Article 12.
The tracking occurs by default without explicit per-interaction consent, suggesting privacy rights are subordinated to analytics interests.
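The configuration described above corresponds to a Mixpanel browser-SDK initialization along these lines. This is a reconstruction for illustration, not the page's actual code: only the `autocapture: true` and `record_sessions_percent: 100` options are attested in the page; the project token and import style are placeholders.

```javascript
// Hypothetical reconstruction of the tracking configuration described above.
// Only the `autocapture` and `record_sessions_percent` options are attested
// in the page source; the project token is a placeholder.
import mixpanel from "mixpanel-browser";

mixpanel.init("PROJECT_TOKEN", {
  autocapture: true,            // automatically captures clicks, form submits, page views
  record_sessions_percent: 100, // Session Replay records 100% of visitor sessions
});
```

A privacy-conscious deployment would instead sample a small percentage of sessions and gate initialization behind an explicit consent banner, which is precisely the mechanism the audit finds absent here.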
ND
Article 13: Freedom of Movement
ND
Article 14: Asylum
ND
Article 15: Nationality
ND
Article 16: Marriage & Family
ND
Article 17: Property
ND
Article 18: Freedom of Thought
ND
Article 20: Assembly & Association
ND
Article 21: Political Participation
ND
Article 26: Education
ND
Article 27: Cultural Participation
ND
Article 28: Social & International Order
ND
Article 29: Duties to Community
ND
Article 30: No Destruction of Rights
Structural Channel
What the site does
0.00
Article 19: Freedom of Expression
Medium Coverage
Structural
0.00
Context Modifier
+0.02
SETL
+0.20
Fortune.com's business-journalism model provides access to this information, though a paywall restricts it to subscribers; the publicly indexed article nonetheless supports discoverability.
-0.05
Preamble
Medium Framing
Structural
-0.05
Context Modifier
0.00
SETL
+0.05
Fortune.com's paywall and restricted access partially limit the visibility of this information about economic systems affecting human welfare.
-0.35
Article 12: Privacy
High Practice
Structural
-0.35
Context Modifier
-0.05
SETL
ND
The page contains Mixpanel tracking initialized with 'autocapture: true' and 'record_sessions_percent: 100', indicating systematic real-time recording of all user sessions and behavioral interactions, with no explicit, granular consent mechanism visible in the provided content.
ND
Article 1: Freedom, Equality, Brotherhood
ND
Article 2: Non-Discrimination
ND
Article 3: Life, Liberty, Security
Medium Coverage Framing
The article reports that 90% of firms surveyed experienced no impact from AI on employment over three years, a finding that contradicts earlier fears of workforce disruption. This data point suggests some protection of employment-based security, though executives' expectations of future employment cuts (0.7%) introduce uncertainty.
ND
Article 4: No Slavery
ND
Article 5: No Torture
ND
Article 6: Legal Personhood
ND
Article 7: Equality Before Law
ND
Article 8: Right to Remedy
ND
Article 9: No Arbitrary Detention
ND
Article 10: Fair Hearing
ND
Article 11: Presumption of Innocence
ND
Article 13: Freedom of Movement
ND
Article 14: Asylum
ND
Article 15: Nationality
ND
Article 16: Marriage & Family
ND
Article 17: Property
ND
Article 18: Freedom of Thought
ND
Article 20: Assembly & Association
ND
Article 21: Political Participation
ND
Article 22: Social Security
Medium Coverage Framing
The article reports empirical findings on AI's limited impact on employment: 90% of firms report no employment impact, though 25% of respondents don't use AI at all. About two-thirds of executives use AI but only ~1.5 hours per week. These data points suggest that anticipated workforce disruption has not materialized, providing some protection of employment-based social security. The article also discusses IBM's decision to hire more junior workers to maintain leadership pipeline despite AI capabilities, indicating workforce continuity concerns.
ND
Article 23: Work & Equal Pay
Medium Coverage
The article discusses work technology adoption, worker perspectives, and labor force impacts. It reports that workers use AI ~1.5 hours per week on average and that worker confidence in AI declined 18% even as usage increased 13%. The article includes worker voice through survey data and references workforce planning decisions (IBM hiring strategy). These elements address work conditions, worker perspectives, and employment practices.
ND
Article 24: Rest & Leisure
Low Framing
The article's primary framing emphasizes productivity as a desirable outcome that AI should deliver. The productivity-focused narrative implicitly values output and efficiency over leisure or work-life balance. However, the reported finding that workers use AI only ~1.5 hours per week suggests minimal work intensification, which moderately mitigates concerns about eroded rest and leisure time.
ND
Article 25: Standard of Living
Medium Framing
The article discusses productivity as a driver of economic growth and, by extension, standard of living. However, the focus is on corporate productivity measures and macroeconomic expectations rather than on actual living standards, poverty reduction, basic needs provision, or distributional equity. The framing assumes productivity gains will translate to improved living standards without addressing how gains are distributed or whether basic needs are met.
ND
Article 26: Education
ND
Article 27: Cultural Participation
ND
Article 28: Social & International Order
ND
Article 29: Duties to Community
ND
Article 30: No Destruction of Rights
Supplementary Signals
Epistemic Quality
0.69
Propaganda Flags
2 techniques detected
repetition
Solow's 1987 quote 'You can see the computer age everywhere but in the productivity statistics' is quoted twice, once in the deck and again in the body, to frame the AI paradox narrative.
causal oversimplification
The article presents the IT productivity paradox resolution (eventual gains in 1990s-2000s) as a historical lesson without fully exploring the conditions, policy decisions, or structural factors that enabled those gains, implying AI may follow the same trajectory.
Solution Orientation
No data
Emotional Tone
No data
Stakeholder Voice
No data
Temporal Framing
No data
Geographic Scope
No data
Complexity
No data
Transparency
No data
Event Timeline
20 events
2026-02-26 12:20
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 12:18
rate_limit
OpenRouter rate limited (429) model=llama-3.3-70b
--
2026-02-26 12:17
rate_limit
OpenRouter rate limited (429) model=llama-3.3-70b
--
2026-02-26 12:16
rate_limit
OpenRouter rate limited (429) model=llama-3.3-70b
--
2026-02-26 10:05
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 10:04
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 10:02
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 10:01
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 10:01
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 09:59
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 09:59
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 09:56
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 09:55
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 09:49
credit_exhausted
Credit balance too low, retrying in 248s
--
2026-02-26 09:43
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 09:38
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 09:29
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 09:29
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 09:28
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox
--
2026-02-26 09:26
dlq
Dead-lettered after 1 attempts: AI adoption and Solow's productivity paradox