home / theguardian.com / item 27413706
+0.50 Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’ (theguardian.com)
156 points by 9wzYQbTYsAIc 1725 days ago | 87 comments on HN | Moderate positive Editorial · v3.7 · 2026-02-26
Summary AI Power & Algorithmic Justice Advocates
This Guardian interview with AI researcher Kate Crawford advocates for critical understanding of artificial intelligence systems as sociotechnical assemblies driven by natural resource extraction and human labor, which concentrate power in corporations, militaries, and police while encoding discriminatory stereotypes. The editorial structure amplifies Crawford's voice through free access and prominent publication, demonstrating institutional commitment to technology criticism and public discourse on algorithmic justice. However, structural tension exists: the platform simultaneously implements extensive third-party tracking and advertising surveillance that contradicts the article's critique of AI-enabled institutional surveillance.
Article Heatmap
Preamble: +0.63
Article 1: +0.46 — Freedom, Equality, Brotherhood
Article 2: +0.36 — Non-Discrimination
Article 3: No Data — Life, Liberty, Security
Article 4: No Data — No Slavery
Article 5: No Data — No Torture
Article 6: No Data — Legal Personhood
Article 7: No Data — Equality Before Law
Article 8: No Data — Right to Remedy
Article 9: No Data — No Arbitrary Detention
Article 10: No Data — Fair Hearing
Article 11: No Data — Presumption of Innocence
Article 12: +0.07 — Privacy
Article 13: No Data — Freedom of Movement
Article 14: No Data — Asylum
Article 15: No Data — Nationality
Article 16: No Data — Marriage & Family
Article 17: +0.46 — Property
Article 18: No Data — Freedom of Thought
Article 19: +0.91 — Freedom of Expression
Article 20: +0.36 — Assembly & Association
Article 21: No Data — Political Participation
Article 22: No Data — Social Security
Article 23: +0.48 — Work & Equal Pay
Article 24: No Data — Rest & Leisure
Article 25: +0.54 — Standard of Living
Article 26: +0.64 — Education
Article 27: +0.66 — Cultural Participation
Article 28: +0.38 — Social & International Order
Article 29: +0.26 — Duties to Community
Article 30: No Data — No Destruction of Rights
Aggregates
Weighted Mean +0.50 Unweighted Mean +0.48
Max +0.91 Article 19 Min +0.07 Article 12
Signal 13 No Data 18
Confidence 25% Volatility 0.20 (Medium)
Negative 0 Channels E: 0.6 S: 0.4
SETL +0.20 Editorial-dominant
FW Ratio 58% 33 facts · 24 inferences
Evidence: High: 1 Medium: 11 Low: 1 No Data: 18
Theme Radar
Foundation: 0.48 (3 articles)
Security: 0.00 (0 articles)
Legal: 0.00 (0 articles)
Privacy & Movement: 0.07 (1 article)
Personal: 0.46 (1 article)
Expression: 0.64 (2 articles)
Economic & Social: 0.51 (2 articles)
Cultural: 0.65 (2 articles)
Order & Duties: 0.32 (2 articles)
HN Discussion 16 top-level · 24 replies
arcanus 2021-06-06 15:57 UTC link
"[Microsoft] Unusually, over its 30-year history, it has hired social scientists to look critically at how technologies are being built. Being on the inside, we are often able to see downsides early before systems are widely deployed. My book did not go through any pre-publication review – Microsoft Research does not require that – and my lab leaders support asking hard questions, even if the answers involve a critical assessment of current technological practices."

Interesting to note she opens with a veiled attack at Google's internal review practices. For those not aware: https://www.google.com/amp/s/www.theverge.com/platform/amp/2...

rektide 2021-06-06 16:07 UTC link
James Martin called it back in 2001, when he described the coming machine-learned systems as Alien Intelligences: unintelligible, inhuman systems. From After the Internet: Alien Intelligence.
stephc_int13 2021-06-06 16:18 UTC link
I agree with most of the content of this article.

Especially this part:

"Ethics are necessary, but not sufficient. More helpful are questions such as, who benefits and who is harmed by this AI system? And does it put power in the hands of the already powerful?"

Distribution of power is the most important political question, more important than distribution of wealth or what we call ethics, that is always biased and hard to measure.

ThalesX 2021-06-06 16:24 UTC link
> But from the beginning there was pushback and more recent work shows there is no reliable correlation between expressions on the face and what we are actually feeling. And yet we have tech companies saying emotions can be extracted simply by looking at video of people’s faces. We’re even seeing it built into car software systems.

I've been involved with a startup where the CEO was certain we were only a step away from replacing humans in recruiting healthcare workers with AI analyzing expressions in self-submitted interview videos, as well as analysis of quizzes. I got pushed aside from that startup for being the 'obnoxious technical humanist' (a characterization I wear proudly).

> AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.

I've argued as a technology consultant time and time again that the effort they spend on automating humans out would be better spent making those humans happier and augmenting them with an automated system. The fact that the CEO saw his low-level employees as just temporary assets ready to be replaced meant that their life in the company was miserable.

> This April, the EU produced the first draft omnibus regulations for AI

I digress. I'm wondering what their plan of action is, since it was a European startup, and since April they are theoretically barred from using AI to gauge candidates, while all of their VC investment money came with the promise of 'automating healthcare recruitment and scaling globally'.

rgbrenner 2021-06-06 16:37 UTC link
AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.

Making something from natural resources does not make it natural. If that were the case, we wouldn't have the word "artificial", since everything we make comes from natural elements and everything would be "natural". The fact that you took natural resources and built something else from it--that it didn't exist in nature already--makes it artificial. AI is definitely artificial.

smoldesu 2021-06-06 16:39 UTC link
Honestly the hardest part about integrating AI into our daily lives is the fact that we don't really have AI yet. We have made great advances in the fields of machine learning and neural networks, but actually getting a computer to make educated decisions is hard. The current issue is that all of these models are black boxes: an ML model can guess which data will come out, but it can't really fully understand what it knows. It can identify slightly harder to notice patterns, but it can't actually think.

We adopted the phrase "AI" far too early in the field, and I suspect it will be another few decades before we have the technical and scientific capability to make real artificial intelligence a thing.

jjcon 2021-06-06 16:41 UTC link
> AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.

People performing the tasks? As in coding tasks? That’s kinda the definition of artificial. If AI spawned naturally… that’s what wouldn’t be artificial.

Or is she referring to data labeling, forgetting the many unsupervised areas of AI? (Not that labeling would make it less artificial.)

Or is she suggesting that anything made from natural resources is natural… is anything not made from natural resources? The reason for artificial in artificial intelligence is to juxtapose against biological intelligence not speak to resource usage.

I think arguing on the ‘not intelligent’ front is fine but that’s just kind of a word game. The field seeks intelligence and is fine making stupid ai if it means better ai later. Unless she is getting into notions of a soul or some bs about consciousness in which case we have left the realm of science altogether.

Any way you slice it, this seems more like an inane point made for a headline than one for substantive discussion.

mjburgess 2021-06-06 16:55 UTC link
> Time and again, we see these systems producing errors ... and the response has been: “We just need more data.” but... you start to see forms of discrimination... in how they are built and trained to see the world.

Thank goodness this perspective is getting out there.

I have recently been incensed by the opposite view, that the bias is within "the data" only: https://www.youtube.com/watch?v=6jbin15-TcY .

This is wholly false. Machines which analyse the world (i.e., actual physical stuff, e.g., people) in terms of statistical co-occurrences within datasets cannot acquire the relevant understanding of the world.

Consider NLP. It is likely that an NLP system analysing volumes of work on minority political causes will associate minority identifiers (eg., "black") with negative terms ("oppressed", "hostile", "against", "antagonistic"), etc. And thereby introduce an association which is not present within the text.

This is because conceptual association is not statistical association. In such texts the conceptual association is "standing for", "opposing", "suffering from", "in need of". Not "likely to occur with".

There are entire fields sold on a false equivocation between conceptual and statistical association. This equivocation generates novel unethical systems.

AI systems are not mere symptoms of their data. They are unable, by design, to understand the data; they repeat it as if it were a mere symptom of worldly co-occurrence.
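The gap the comment describes between statistical and conceptual association is easy to demonstrate. A toy sketch (stdlib only; the sentences are invented for illustration): sentence-level co-occurrence counts link "black" to negative terms, even though in every sentence the conceptual relation is "opposing" or "suffering from".

```python
from collections import Counter
from itertools import combinations

# Invented toy corpus standing in for writing on minority political causes.
corpus = [
    "black communities organised against oppression",
    "activists stood with black workers facing hostile policies",
    "the movement opposed discrimination against black voters",
]

# The purely statistical view: count word pairs occurring in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        pair_counts[(a, b)] += 1

# The counts associate "black" with negative terms; the conceptual relations
# ("opposing", "suffering from") are invisible to co-occurrence statistics.
negative = {"oppression", "hostile", "discrimination"}
hits = sorted(
    (a if b == "black" else b)
    for (a, b) in pair_counts
    if "black" in (a, b) and (a if b == "black" else b) in negative
)
print(hits)
```

Every negative term in the corpus surfaces as a "black"-associated pair, which is exactly the spurious association the comment warns about.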

ltbarcly3 2021-06-06 17:33 UTC link
Ahh so it is naturally occurring without human artifice non-intelligence. Got it. NOWHANI
eunos 2021-06-06 17:55 UTC link
It should not matter whether it is artificial or intelligent or neither, as long as it gets the job done.
version_five 2021-06-06 18:15 UTC link
> ImageNet has now removed many of the obviously problematic people categories – certainly an improvement – however, the problem persists because these training sets still circulate on torrent sites [where files are shared between peers].

This is the scariest part of the article. The idea that some central authority should be censoring and revising datasets to keep up with political orthodoxy, and we should be rooting out unauthorized torrent sharing of unapproved training data.

From a technical point of view, the common reason we pre-train on ImageNet is as a starting point for fine-tuning for a specific use case. The diversity and size of the dataset make for good generic feature extractors. If you're using an ML model to identify people as kleptomaniac or drug dealer or other "problematic" labels, you're working on some kind of phrenology, and it doesn't take an "AI ethicist" to know you shouldn't do it. But that's not the same as pretraining on ImageNet, and it certainly doesn't support trying to make datasets align with today's political orthodoxy.
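The pretrain-then-fine-tune pattern the comment describes can be sketched without any ML framework. In this toy sketch (stdlib only; the "backbone" is a frozen random projection standing in for pretrained ImageNet features, and the task is invented), fine-tuning means training only a new linear head:

```python
import math
import random

random.seed(0)

# Stand-in for a frozen pretrained backbone: a fixed random projection
# plus tanh, never updated during fine-tuning.
D_IN, D_FEAT = 8, 4
W_frozen = [[random.gauss(0, 1) for _ in range(D_IN)] for _ in range(D_FEAT)]

def features(x):
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W_frozen]

# Toy downstream task: the label depends on the first input coordinate.
data = []
for _ in range(200):
    x = [random.gauss(0, 1) for _ in range(D_IN)]
    data.append((x, 1.0 if x[0] > 0 else 0.0))

# Fine-tuning = logistic-regression gradient descent on a new head only.
head = [0.0] * D_FEAT
for _ in range(300):
    for x, y in data:
        f = features(x)
        p = 1 / (1 + math.exp(-sum(w * fi for w, fi in zip(head, f))))
        for i in range(D_FEAT):
            head[i] -= 0.5 * (p - y) * f[i] / len(data)

accuracy = sum(
    (sum(w * fi for w, fi in zip(head, features(x))) > 0) == (y == 1.0)
    for x, y in data
) / len(data)
print(round(accuracy, 2))
```

The frozen features were never trained on the downstream labels, yet the cheap head does far better than chance, which is the whole point of reusing a generic feature extractor.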

Husafan 2021-06-06 18:29 UTC link
Discuss amongst yourselves.
t_von_doom 2021-06-06 18:42 UTC link
> The idea that you can see from somebody’s face what they are feeling is deeply flawed. I don’t think that’s possible.

I agree with most of the article, but I disagree with this point. Non-verbal communication is a huge component of how we interact with each other.

Covid example: when on calls where no cameras are on, I get far less feedback when presenting to a 'room' compared to presenting in person or with cams on, even if the room stays silent in both situations.

By looking at faces I can see who is distracted, who looks confused and whether what I am saying is being received well or poorly. Did that joke get smiles (polite or genuine ones?) or eye rolls?

Now, do I think the current SOTA algorithms are at this level of nuance? No, definitely not. But to say it isn't possible at all is ridiculous, in my opinion.

slver 2021-06-06 19:10 UTC link
Even in situations where there's an objective reason for AI performing poorly with certain groups (like the worse light contrast on facial details with many photos of dark-skinned people), we go back to default and seek to cast moral judgment on someone for being racist, sexist, or something of the sort. Because you can't blame AI, we blame the people programming it, collecting data for it and training it.

It seems we're trying to prevent a full-on moral panic that would be caused by the realization that "discrimination" is an objective need by the objective nature of the problem in some cases.

For example, about 14% of people in the US are black. If you have a FAIR SET OF DATA to train from, 14% of those faces would be black. No, they're not underrepresented; they're literally accurately represented. But this means the AI will be worse with those people. So what do we do? If we artificially up the "quota" and train with 50% black faces, the AI will now underperform with non-black faces.

So then you need to train two networks: black faces and non-black faces, and then you need to "discriminate" between both, and pick the right network depending on the race you're working with.

Math is racist sometimes.
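Whatever one makes of the comment's conclusions, the sample-size effect it leans on is real and easy to simulate. A toy simulation (stdlib only; the numbers are illustrative, not a claim about any real system): with a fixed data budget sampled proportionally, the smaller group's statistics are estimated less precisely.

```python
import random
import statistics

random.seed(0)

TOTAL = 1000  # fixed overall training budget

def estimation_error(share, true_value=5.0, trials=200):
    """Average error when estimating a group statistic from that
    group's proportional slice of the data budget."""
    n = max(1, int(TOTAL * share))
    errors = []
    for _ in range(trials):
        sample = [random.gauss(true_value, 1.0) for _ in range(n)]
        errors.append(abs(statistics.mean(sample) - true_value))
    return statistics.mean(errors)

# A "fair" proportional sample: 14% vs 86% of the budget.
err_minority = estimation_error(0.14)
err_majority = estimation_error(0.86)
print(err_minority > err_majority)
```

The minority group's estimate carries roughly sqrt(86/14) times the standard error of the majority's, purely from the sample sizes, which is the trade-off the comment is pointing at.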

yarg 2021-06-06 23:43 UTC link
Yes it is, and of course it's not - it's artificial.

If it were intelligent we'd be calling it "synthetic".

mark_l_watson 2021-06-06 23:43 UTC link
I am reading her book and so far, I am enjoying it. The book really made me think of externalities for tech that I use and earn a living with. Costs of these externalities are environmental and human suffering. Recommended.
stephc_int13 2021-06-06 16:08 UTC link
Not so veiled, and fully warranted, IMHO.
ocdtrekkie 2021-06-06 16:28 UTC link
My first thought was that even giving this brief interview would be a fireable offense for Google's AI Ethics team members.
dec0dedab0de 2021-06-06 16:37 UTC link
I agree, but wouldn't those questions, or at least any opinions on the answers to those questions, just be part of ethics?
spoonjim 2021-06-06 16:39 UTC link
But anything that benefits Microsoft, one of the most powerful companies in the world, is necessarily putting more power in the hands of the already powerful.
otabdeveloper4 2021-06-06 16:45 UTC link
> The current issue is that all of these models are black boxes

Not quite. It'd be great if that was the only issue with neural network ML. A much bigger issue is that neural networks have extremely limited applicability. They're great for classification problems when you can have huge training datasets, but for many common problems they're useless.

One obvious class of problems is time series prediction - extremely important in life and for business, and something neural networks are no good for.

jcims 2021-06-06 16:56 UTC link
I had the exact same reaction, but I wonder if I'm too close to it to appreciate what the term might conjure up in the broader (voting) public that isn't in the industry. (FWIW I kind of like the term synthetic intelligence as an alternative.)

I do agree wholeheartedly that, much as in the case of cryptocurrency, there isn't an intuitive link between the product and the resources it requires to create and operate. The fact that GPT-3 would cost millions to reproduce in the commercial market doesn't even compute for me.

thibautg 2021-06-06 16:57 UTC link
"Approximate Imitation"?
throwaway3699 2021-06-06 17:01 UTC link
I don't know, inference from data is literally how all decisions are fundamentally made. Why wouldn't it be possible to create models that learn this particular pattern?
js2 2021-06-06 17:03 UTC link
Cybotron5000 2021-06-06 17:06 UTC link
It seems to me somewhat similar to an argument put forward by Jaron Lanier: that what are termed artificial intelligence's (or 'deep learning''s, etc.) current successes, at least in something like the big corporations' translation services, are actually built by leveraging corpora that are the fruits of a huge amount of individual human effort. Sometimes, say with Amazon's 'Mechanical Turk' or with CAPTCHAs, this connection is more explicit (a bit Wizard of Oz! :) As I understand it, he proposes that an alternative to UBI etc. might be micro-compensations (or transactions) in return for providing this data. This might be a prelude/transition to a stage where our basic needs (on a sort of Maslow's hierarchy) were met by A.I., or there might be increasingly creative or interesting ways to complete some tasks that we could go on refining forever.
efnx 2021-06-06 17:08 UTC link
I’m not arguing with your point, but let me offer another thought. The crux of this problem as I see it is that people consider themselves as separate from nature. The truth is that we are part of nature and the things we make are still part of nature. The opposite of “natural” is not “artificial” or “man-made” - it’s “supernatural”!

Of course the word “artificial” is useful to classify things for our safety and benefit, but we are not supernatural and so the things we create still exist in nature - like we are the hand of the universe reconfiguring itself.

This artificial separation of the human from nature has been popular in the past and I hope we overcome it.

antonzabirko 2021-06-06 17:31 UTC link
Good, glad google is facing at least some sort of consequences as minor as they might be. It might even end up snowballing against them.
grawprog 2021-06-06 17:42 UTC link
That's because that was an inane point used as a headline that didn't really have much to do with what the article is actually about.

It's a criticism of the training data sets used on AIs, manually tagged by individuals, which has led to some extreme biases in AI behaviour resulting in real-world consequences.

For a great real world example of this happening right now, there's a fairly large scandal and a whole bunch of angry people over the problems caused by latitude's choice of training material for their ai driven text adventure game.

https://gitgud.io/AuroraPurgatio/aurorapurgatio

aarch64 2021-06-06 19:00 UTC link
Kai-Fu Lee and David Siegel talked about AI in April 2019 and mentioned:

> It's neither artificial nor intelligent

Quote is at 3:40 (topic starts around 3:05) in the YouTube video linked at the end of the article:

https://www.twosigma.com/articles/ai-past-and-future-a-conve...

cocoafleck 2021-06-06 19:11 UTC link
I believe that people (usually) choose to communicate with faces, but just as people can lie with words they can lie with faces.
TimPC 2021-06-06 19:24 UTC link
I think it's impossible for a human to read a mainstream body of minority political work and not come out with an association between black and oppressed. The entire dominant narrative is that all minority groups are oppressed; that association is definitely present in the text. Maybe we need to explicitly remove all negative associations for things like skin colour (potentially a hard problem in its own right) to generate more egalitarian text. But it's not merely a matter of AI getting things wrong; some negative associations are actually present in the text.
sanity31415 2021-06-06 19:38 UTC link
It sucks when facts get in the way of a fashionable narrative, but the Google AI researcher was fired for demanding the names of an internal review panel who rejected her paper. She had a reputation for accusing her colleagues of bigotry and other toxic behavior.

The r/machinelearning thread provides a more balanced perspective: https://www.reddit.com/r/MachineLearning/comments/k77sxz/d_t...

Aunche 2021-06-06 20:35 UTC link
>Being on the inside, we are often able to see downsides early before systems are widely deployed.

Well, they failed to catch that Microsoft Tay would be instantly turned into an internet Nazi bot.

tracnar 2021-06-06 21:10 UTC link
To me she is referring to what she says previously:

> Also, systems might seem automated but when we pull away the curtain we see large amounts of low paid labour, everything from crowd work categorising data to the never-ending toil of shuffling Amazon boxes.

So the artificial and intelligent part is a tiny piece of the task, most of it is done through humans and logistics.

simonswords82 2021-06-06 21:44 UTC link
Except when people are using the term AI to sell nonsense that doesn't do what it says on the tin.
NoPicklez 2021-06-07 01:12 UTC link
I think a better term to describe it is synthetic, which is something man made. Synthetic intelligence kinda has a nice ring to it as well.

But I agree with you: if they had their way, we would have "artificial sweeteners" but "natural sweeteners".

rafaelero 2021-06-07 02:56 UTC link
> This is because conceptual association is not statistical association. In such texts the conceptual association is "standing for", "opposing", "suffering from", "in need of". Not "likely to occur with".

The better GPT gets, the more wrong you will probably be. Why wouldn't a machine be able to abstract conceptual associations from a statistical framework?

throwaway3699 2021-06-07 07:08 UTC link
Race aside, you can make this argument for any social stratum: gender, race, nationality, social class, etc. I don't believe most models will perform very differently with a balanced training set.
bart_spoon 2021-06-07 18:18 UTC link
Yeah, she seems to be undercutting her argument for the sake of a pithy sounding statement. Her point seems to be not that AI isn't manmade, but rather that it isn't particularly autonomous, given that it is built using data that is collected and labeled through enormous amounts of human labor. Obviously that doesn't have any relation to what is meant by "artificial", though.
Editorial Channel
What the content says
+0.70
Article 19 Freedom of Expression
High Advocacy Practice
Editorial
+0.70
SETL
+0.26

The entire article is a vehicle for free expression: an interview with an AI researcher offering critical analysis of algorithmic power, bias, and surveillance. Crawford is given direct voice to articulate views critical of Microsoft and other tech companies. The headline itself presents her controversial thesis. Editorial and design choices amplify her expression.

+0.60
Preamble Preamble
Medium Advocacy Framing
Editorial
+0.60
SETL
+0.42

Crawford advocates for recognition that AI systems are not neutral artifacts but embedded in power structures and labor relationships. The framing emphasizes human rights implications: how systems perpetuate stereotypes and empower 'corporations, militaries and police.' This directly engages the Preamble's emphasis on human dignity and freedom.

+0.60
Article 23 Work & Equal Pay
Medium Advocacy Framing
Editorial
+0.60
SETL
+0.42

Crawford's analysis of how 'natural resources and human labour drive machine learning' directly engages with workers' rights to fair compensation and decent work. She frames workers and resource communities as exploited in AI production without adequate benefit or voice. Implicitly advocates for labor protections.

+0.50
Article 1 Freedom, Equality, Brotherhood
Medium Framing
Editorial
+0.50
SETL
+0.22

Article discusses how AI systems encode stereotypes and reinforce existing inequalities, touching on the tension between formal equality and substantive dignity. Crawford's argument implies that neutral technical systems can violate Article 1's aspiration for equality in dignity and rights.

+0.50
Article 17 Property
Medium Framing Advocacy
Editorial
+0.50
SETL
+0.22

Crawford's critique of AI systems as tools 'empowering already powerful institutions' implicitly defends property and proprietary interests of individuals against algorithmic exploitation. Discussion of how natural resources and human labor are captured without appropriate benefit or consent frames this as a property rights concern.

+0.50
Article 26 Education
Medium Framing
Editorial
+0.50
SETL
-0.24

The article implicitly engages with right to education: Crawford's analysis requires readers to develop critical understanding of AI systems' social implications. Her work is educational in framing technology as a sociotechnical phenomenon requiring informed public discourse.

+0.50
Article 27 Cultural Participation
Medium Advocacy Framing
Editorial
+0.50
SETL
+0.22

Crawford's work engages with cultural and scientific participation: she offers intellectual resources for public participation in understanding AI systems. The article presents technical/scientific knowledge in accessible interview format, enabling lay audience participation in technology discourse.

+0.50
Article 28 Social & International Order
Medium Advocacy Framing
Editorial
+0.50
SETL
+0.39

Crawford's entire argument is that current social order—as structured by AI systems—violates human dignity and rights. Her thesis that AI systems 'empower already powerful institutions' calls for a social order where human rights are guaranteed. Implicit advocacy for rights-respecting institutional structures.

+0.40
Article 2 Non-Discrimination
Medium Framing
Editorial
+0.40
SETL
+0.20

Crawford's discussion of stereotypes 'baked into' algorithms directly addresses discrimination. The article frames AI systems as capable of embedding and automating discriminatory classifications based on protected characteristics (race, gender, etc.). Implicitly advocates against discrimination in algorithmic decision-making.

+0.40
Article 20 Assembly & Association
Medium Framing
Editorial
+0.40
SETL
+0.20

The article enables Crawford to advocate for collective recognition of AI's power dynamics and their harms. The focus on institutional empowerment through AI systems implicitly supports the right of people to associate and organize against those systems. No direct advocacy for assembly.

+0.40
Article 25 Standard of Living
Medium Framing
Editorial
+0.40
SETL
-0.22

Crawford's discussion of how AI systems reinforce existing power hierarchies and stereotypes touches on health and welfare implications: algorithmic discrimination affects healthcare decisions, resource allocation, and social services access. Implicit concern with algorithmic equity in welfare systems.

+0.30
Article 12 Privacy
Medium Framing
Editorial
+0.30
SETL
+0.39

Article content does not directly address privacy. However, context of AI surveillance criticism (facial recognition keywords) implies privacy concerns. No direct editorial engagement with right to privacy.

+0.30
Article 29 Duties to Community
Low Framing
Editorial
+0.30
SETL
+0.17

No direct content on limitations on rights or duties. However, Crawford's framework implies that institutional duties (of governments, corporations) are to constrain AI systems' discriminatory and surveillance impacts.

ND
Article 3 Life, Liberty, Security
null

No observable content on right to life or security of person.

ND
Article 4 No Slavery
null

No observable content on slavery or servitude.

ND
Article 5 No Torture
null

No observable content on torture or inhuman treatment.

ND
Article 6 Legal Personhood
null

No observable content on recognition as a person before the law.

ND
Article 7 Equality Before Law
null

No observable content on equal protection before the law.

ND
Article 8 Right to Remedy
null

No observable content on remedy for rights violations.

ND
Article 9 No Arbitrary Detention
null

No observable content on arbitrary detention.

ND
Article 10 Fair Hearing
null

No observable content on fair trial or due process in judicial proceedings.

ND
Article 11 Presumption of Innocence
null

No observable content on presumption of innocence or criminal liability.

ND
Article 13 Freedom of Movement
null

No observable content on freedom of movement.

ND
Article 14 Asylum
null

No observable content on asylum or refuge.

ND
Article 15 Nationality
null

No observable content on nationality.

ND
Article 16 Marriage & Family
null

No observable content on marriage or family.

ND
Article 18 Freedom of Thought
null

No observable content on freedom of thought, conscience, or religion.

ND
Article 21 Political Participation
null

No observable content on political participation or voting.

ND
Article 22 Social Security
null

No observable content on social security or economic welfare.

ND
Article 24 Rest & Leisure
null

No observable content on rest, leisure, or work hours.

ND
Article 30 No Destruction of Rights
null

No observable content on prohibition of destruction of rights.

Structural Channel
What the site does
Domain Context Profile
Element Modifier Affects Note
Privacy +0.05
Article 12
The Guardian implements cookie tracking and analytics infrastructure with consent mechanisms visible in page config. Permutive, Braze, and multiple ad networks are present.
Terms of Service
No explicit TOS content visible in provided HTML.
Accessibility +0.10
Article 25 Article 27
Responsive design (multiple srcsets), alt text support visible, lighthouse optimization evident. Mobile-first responsive architecture supports broad accessibility.
Mission +0.15
Preamble Article 19
The Guardian's editorial mission emphasizes independent journalism and public service. Publishing interview critical of AI power concentration aligns with transparency and free expression values.
Editorial Code +0.10
Article 19
Interview format allows subject direct voice; byline clearly attributed to Zoë Corbyn; metadata shows commissioning desk (observer-new-review).
Ownership
No ownership structure information disclosed in provided content.
Access Model +0.10
Article 26 Article 27
Article marked isAccessibleForFree=true in schema. Registration gates behind signInGate switch. Membership model present but content freely available.
Ad/Tracking -0.08
Article 12
Extensive advertising infrastructure present: 40+ ad/tracking switches enabled (prebid networks, DFP, Criteo, AppNexus, Magnite, etc.). Third-party tracking infrastructure is comprehensive.
+0.60
Article 19 Freedom of Expression
High Advocacy Practice
Structural
+0.60
Context Modifier
+0.25
SETL
+0.26

The Guardian's platform enables this speech through: bylined attribution, free access, prominent placement (6 June 2021 publication in Observer New Review), full interview format, and social media integration. No evidence of editorial suppression or marginalization. commissioningdesks metadata shows deliberate editorial selection.

+0.60
Article 26 Education
Medium Framing
Structural
+0.60
Context Modifier
+0.10
SETL
-0.24

Free access and clear publication metadata support information accessibility. Schema.org markup ensures discoverability. Commissioning desk structure (observer-new-review) indicates deliberate editorial investment in education-like content.

+0.50
Article 25 Standard of Living
Medium Framing
Structural
+0.50
Context Modifier
+0.10
SETL
-0.22

The site implements responsive design, mobile optimization, and accessibility features (srcsets, alt-text support, lightbox images). This structural accessibility enables readers with diverse needs to access critical health and welfare analysis.

+0.40
Article 1 Freedom, Equality, Brotherhood
Medium Framing
Structural
+0.40
Context Modifier
0.00
SETL
+0.22

The bylined interview structure ensures direct attribution; the commissioningdesk metadata shows editorial oversight.

+0.40
Article 17 Property
Medium Framing Advocacy
Structural
+0.40
Context Modifier
0.00
SETL
+0.22

The site allows free access to property-rights-critical content without a paywall.

+0.40
Article 27 Cultural Participation
Medium Advocacy Framing
Structural
+0.40
Context Modifier
+0.20
SETL
+0.22

Free access, responsive design, and clear attribution (author, publication, dates) enable cultural participation. Keywords include 'science and nature books' (bisac-prefix), indicating alignment with science communication.

+0.30
Preamble Preamble
Medium Advocacy Framing
Structural
+0.30
Context Modifier
+0.15
SETL
+0.42

The site structure enables interview publication with a direct subject voice; access is free; metadata is properly attributed. Advertising infrastructure is present but does not suppress critical content.

+0.30
Article 2 Non-Discrimination
Medium Framing
Structural
+0.30
Context Modifier
0.00
SETL
+0.20

No structural barriers to reading this analysis on non-discrimination grounds.

+0.30
Article 20 Assembly & Association
Medium Framing
Structural
+0.30
Context Modifier
0.00
SETL
+0.20

A discussion section is enabled site-wide (discussionAllPageSize and enableDiscussionSwitch are set to true), allowing readers to form a collective response; however, commentable is false for this article, limiting direct reader voice below the line.
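The interplay of those flags can be sketched as follows. The switch names (enableDiscussionSwitch, commentable, discussionAllPageSize) appear in the page's embedded config, but the dict layout, the page-size value, and the comments_open helper are illustrative assumptions:

```python
# Hypothetical config map mirroring the page's embedded switches; only the
# key names come from the page, the structure and values are illustrative.
config = {
    "switches": {"enableDiscussionSwitch": True},
    "page": {"commentable": False, "discussionAllPageSize": 50},
}

def comments_open(cfg):
    """Comments render only when the site-wide switch AND the per-article
    flag both allow it."""
    return cfg["switches"]["enableDiscussionSwitch"] and cfg["page"]["commentable"]

print(comments_open(config))  # → False
```

The AND gating is why a site-wide discussion feature can still leave an individual article closed to reader comments.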

+0.30
Article 23 Work & Equal Pay
Medium Advocacy Framing
Structural
+0.30
Context Modifier
0.00
SETL
+0.42

Free access and the publication platform enable dissemination of labor-critical analysis.

+0.20
Article 28 Social & International Order
Medium Advocacy Framing
Structural
+0.20
Context Modifier
0.00
SETL
+0.39

While the article advocates for a rights-respecting order, The Guardian's own advertising and tracking infrastructure undermines privacy rights, creating structural tension: the platform enables expression but relies on a surveillance-based business model that contradicts the article's message.

+0.20
Article 29 Duties to Community
Low Framing
Structural
+0.20
Context Modifier
0.00
SETL
+0.17

Limited structural signal. The site enables expression but does not enforce community duties to protect privacy.

-0.20
Article 12 Privacy
Medium Framing
Structural
-0.20
Context Modifier
-0.03
SETL
+0.39

Structural analysis reveals significant privacy tension: the page implements extensive third-party tracking (40+ ad/tracking switches), cookie-based browserId collection, Ophan pageViewId generation, and multiple ad networks (Criteo, AppNexus, Magnite, Permutive, etc.). This contradicts privacy protections while hosting content critiquing AI surveillance.

ND
Article 3 Life, Liberty, Security
null

No structural signals.

ND
Article 4 No Slavery
null

No structural signals.

ND
Article 5 No Torture
null

No structural signals.

ND
Article 6 Legal Personhood
null

No structural signals.

ND
Article 7 Equality Before Law
null

No structural signals.

ND
Article 8 Right to Remedy
null

No structural signals.

ND
Article 9 No Arbitrary Detention
null

No structural signals.

ND
Article 10 Fair Hearing
null

No structural signals.

ND
Article 11 Presumption of Innocence
null

No structural signals.

ND
Article 13 Freedom of Movement
null

No structural signals.

ND
Article 14 Asylum
null

No structural signals.

ND
Article 15 Nationality
null

No structural signals.

ND
Article 16 Marriage & Family
null

No structural signals.

ND
Article 18 Freedom of Thought
null

No structural signals.

ND
Article 21 Political Participation
null

No structural signals.

ND
Article 22 Social Security
null

No structural signals.

ND
Article 24 Rest & Leisure
null

No structural signals.

ND
Article 30 No Destruction of Rights
null

No structural signals.

Supplementary Signals
Epistemic Quality
0.77 medium claims
Sources
0.8
Evidence
0.8
Uncertainty
0.7
Purpose
0.8
Propaganda Flags
0 techniques detected
Solution Orientation
0.42 problem only
Reader Agency
0.3
Emotional Tone
urgent
Valence
-0.3
Arousal
0.7
Dominance
0.5
Stakeholder Voice
0.42 2 perspectives
Speaks: institution, individuals
About: corporation, government, military_security, marginalized
Temporal Framing
present immediate
Geographic Scope
global
Complexity
moderate · medium jargon · general audience
Transparency
0.67
✓ Author
Event Timeline 20 events
2026-02-26 06:04 dlq Dead-lettered after 1 attempts: Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
2026-02-26 06:02 credit_exhausted Credit balance too low, retrying in 252s
2026-02-26 06:01 dlq Dead-lettered after 1 attempts: Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
2026-02-26 06:00 credit_exhausted Credit balance too low, retrying in 341s
2026-02-26 05:58 credit_exhausted Credit balance too low, retrying in 247s
2026-02-26 05:57 credit_exhausted Credit balance too low, retrying in 248s
2026-02-26 05:56 dlq Dead-lettered after 1 attempts: Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
2026-02-26 05:56 dlq Dead-lettered after 1 attempts: Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
2026-02-26 05:56 credit_exhausted Credit balance too low, retrying in 321s
2026-02-26 05:55 credit_exhausted Credit balance too low, retrying in 334s
2026-02-26 05:54 credit_exhausted Credit balance too low, retrying in 358s
2026-02-26 05:51 credit_exhausted Credit balance too low, retrying in 322s
2026-02-26 05:49 dlq Dead-lettered after 1 attempts: Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
2026-02-26 05:47 dlq Dead-lettered after 1 attempts: Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
2026-02-26 05:44 credit_exhausted Credit balance too low, retrying in 326s
2026-02-26 05:42 credit_exhausted Credit balance too low, retrying in 258s
2026-02-26 05:42 dlq Dead-lettered after 1 attempts: Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’
2026-02-26 05:41 credit_exhausted Credit balance too low, retrying in 318s
2026-02-26 05:36 credit_exhausted Credit balance too low, retrying in 344s
2026-02-26 05:36 credit_exhausted Credit balance too low, retrying in 245s
About HRCB | By Right | HN Guidelines | HN FAQ | Source | UDHR | RSS
build 59cf82e+tpso · deployed 2026-02-26 02:38 UTC · evaluated 2026-02-26 04:51:33 UTC