Digital Colonialism: People Using Technology

Photo by fikry anshor / Unsplash

Examining the rise and proliferation of neocolonial surveillance corporations, two reports advance the idea of a digitally powered 'surveillance capitalism': the economic exploitation of society through corporate and state mass surveillance.

The primary author is Michael Kwet of Yale Law School. He also contributes to The Intercept and wrote this long-form essay at the Transnational Institute. Here is his doctoral dissertation on education technology in South Africa.

The central idea is that US-based transnational corporations, led by “GAFAM” (Google, Amazon, Facebook, Apple, Microsoft), continuously collect little details about us from our computers, mobile phones, and internet activities. In doing so, they amass big data, which is then utilised to sell ads and data insights to an array of buyers (state organisations, private companies, spy agencies, etc.). To me, this is logical, rational, and backed by numerous instances of fact.

The report tells the story of how the structure of the internet changed over the past two decades. What began as a decentralised information system which allowed people to communicate without easy government and corporate censorship was quickly captured by state-corporate power. A handful of US-based transnational corporations, led by GAFAM (Google/Alphabet, Amazon, Facebook, Microsoft, and Apple), became the five richest corporations in the world, with a combined market value topping $5 trillion.

How did this happen? The short answer is that US transnationals took over the digital ecosystem – software, hardware, and network connectivity.

Photo by Nastya Dulhiier / Unsplash

Because US-based corporations own and control the technical architecture of the digital ecosystem, they can design it to exploit the global society. They use their power as infrastructure owners to shape the flow of information, extract rent, and monetise Big Data surveillance. They also use their resources to influence the law, control new innovations, and set the ideological frame for how a digital society should work.

The most memorable part of the 'People's Tech for People's Power' paper was its comparison to the nineteenth-century colonisation of Africa:

During the late nineteenth century, white colonizers seized the mineral-rich land in Kimberley and Johannesburg, owned and operated the heavy machinery necessary to drill deep underground, fashioned the chemicals required to exploit the minerals, recruited the engineers needed to produce all of these technologies for industrial-scale looting, and forced people of color into cheap and menial labour. US engineers swooped in for the bounty. European missionaries waged intellectual warfare on the population to compel obedience to brutal, racialised oppression. Similar patterns of colonisation were repeated throughout the world.
Today, the “open veins” of the Global South are the “digital veins” crossing the oceans, wiring up a tech ecosystem owned and controlled by a handful of mostly US-based corporations. The transoceanic cables are often fitted with strands of fibre owned by corporations like Google and Facebook, for the purpose of data extraction. The cloud centres are the heavy machinery owned by the likes of Amazon and Microsoft, proliferating like military bases for US empire, with Google, IBM, and Alibaba following behind. The engineers are the corporate armies of elite programmers numbering in the hundreds of thousands, with generous salaries of R4 million ($250,000) or more as compensation.
The exploited labourers are the people of color producing the minerals in the Congo and Bolivia, the armies of cheap labour annotating artificial intelligence data in China and Africa, the East Asian workers enduring PTSD to cleanse Big Social Media of graphic content, and the vast majority of people asked to specialise in non-digital goods and services in a worldwide division of labour. The US is at the helm of advanced economic production, which it dominates through the ownership of intellectual property and core infrastructure, backed by imperial trade policies at the World Trade Organisation. The missionaries are the World Economic Forum elites, the CEOs of Big Tech corporations, and the mainstream "critics" in the US who dominate the “resistance” narrative, many of whom work for or take money from corporations like Microsoft and Google, and integrate with a network of US-Eurocentric intellectuals drawn from elite Western universities and media outlets.
Slaves cutting the sugar cane on the Island of Antigua, 1823
Photo by British Library / Unsplash

Impact

The most impactful part of this digital colonial perspective (to me) is the inaccessible walled gardens of processed data, curated by psychologists and behavioural scientists. Beyond that, I am curious about the processed data and its use in predictive artificial intelligence systems. These are the concepts of sci-fi Hollywood movies, just not as glamorous.

Cambridge Analytica famously used the O.C.E.A.N. model, or five factors of personality – Openness to experience, Conscientiousness, Extroversion, Agreeableness, and Neuroticism – to analyse and psychographically profile people. Research has shown that these factors are interconnected, and also connect with many other aspects of one's life. People can be described in a few targeted keywords, such as 'warm' or 'gullible', and subsequent digital activities (posts or advertising targeting systems) can be employed to target people with a specific tendency.

For example, Cambridge Analytica reportedly targeted undecided voters who showed a pattern of neurotic or paranoid online behaviour with pro-Brexit, anti-European, and anti-Obama ads.
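As a toy illustration of that kind of trait-based segmentation (all user names, scores, and the threshold below are invented):

```python
# Hypothetical OCEAN profiles: trait scores from 0.0 to 1.0.
# All names, scores, and the 0.8 threshold are invented for illustration.
profiles = {
    "user_a": {"O": 0.8, "C": 0.4, "E": 0.6, "A": 0.9, "N": 0.2},
    "user_b": {"O": 0.3, "C": 0.7, "E": 0.2, "A": 0.4, "N": 0.85},
    "user_c": {"O": 0.5, "C": 0.5, "E": 0.5, "A": 0.5, "N": 0.9},
}

def segment_by_trait(profiles, trait, threshold=0.8):
    """Return users scoring at or above `threshold` on one trait."""
    return [user for user, scores in profiles.items()
            if scores[trait] >= threshold]

# An ad system could pull the high-neuroticism segment for tailored messaging.
high_neuroticism = segment_by_trait(profiles, "N")
print(high_neuroticism)  # ['user_b', 'user_c']
```

The real systems are vastly more sophisticated, of course, but the core move is the same: score people on traits, slice the population into segments, and serve each segment different content.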

I am fairly certain that larger data-informed organisations have adopted this lens for customer profiling and user profiling as well.

Rise of Predictive Analytics via Machine Learning

What is not being discussed is the ingestion of various data sources into machine learning programs that undertake predictive psychological modelling on a grand scale.  To illustrate:

It's 2021. We know that different sections of the population respond differently to vaccines and COVID-19 prevention techniques. We can know with relative certainty how many people have been vaccinated and, with a degree of error, how they think.

We could probably (with the right laws in place) merge public health records with OCEAN data from social media companies to form a master dataset, or segment analysis, of the population's agreeableness or resistance to vaccination. We could then use this as a baseline: feed each audience group simulated posts and measure the anticipated, computed feedback. Next, we compare this simulated messaging with real messaging using real data and bridge the gaps in our model. Over successive cycles and iterations, we build a more accurate predictive model of responses to messaging. We repeat this for several thousand cycles, tweaking the main parameters of each segment's psychology profile.

Over time, we can run many models on a machine learning instance (think of this running 1,000 cycles every hour) to explore all the possible variances in population responses.
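That simulate-compare-adjust loop can be sketched in a few lines. This is a deliberately naive toy, not anyone's actual system; the segments, response values, noise model, and learning rate are all invented:

```python
import random

random.seed(0)

# Invented ground truth: each segment's real receptiveness to a message,
# which the modellers cannot observe directly.
true_response = {"receptive": 0.7, "hesitant": 0.4, "resistant": 0.1}

# The model starts with a naive 0.5 guess for every segment.
model = {segment: 0.5 for segment in true_response}

def observe(segment):
    """Simulate noisy real-world feedback from one messaging cycle."""
    return true_response[segment] + random.uniform(-0.05, 0.05)

# Predict, compare with observed feedback, then nudge the model, over
# thousands of cycles.
for cycle in range(1000):
    for segment in model:
        error = observe(segment) - model[segment]
        model[segment] += 0.05 * error  # small learning rate

# The tuned parameters now sit close to the hidden true responses.
print({s: round(v, 2) for s, v in model.items()})
```

Scale this across thousands of segments and message variants and you have, in outline, a population-level response-prediction machine.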

The technology and resources required to do this include:

  • Government records (currently available for government projects)
  • Social media records (currently available for government projects)
  • Scientific analysis (currently available)
  • Machine learning instances (currently available)
  • Data aggregation and cleaning machines (currently available)
  • Annotation services  (currently available)

This kind of analysis is not yet regulated. I wonder which branch of government this falls under, or whether it is primarily managed and guided by privately owned technology companies, who can provide the bulk of the services.

Hollow Desire
Photo by Alex Iby / Unsplash

Threat Modelling

Different people face different risks from surveillance and predictive modelling. Most Hollywood movies deal with the scenario of being falsely accused, and I think this is entirely plausible given the significant error rates in the modelling.

An activist or journalist might face severe threats like jail time or assassination, while the average person might face lower-level repercussions, such as consumer manipulation or cyber theft. Careless people could easily have their identities hijacked as part of a wider, sophisticated data-integrity attack.

Keep Your Teeth Clean. Poster published by [Rochester, NY]: Federal Art Project, [between 1936 and 1938]. From the Work Projects Administration Poster Collection. Library of Congress Prints & Photographs Division. 

https://www.loc.gov/item/92517367/
Photo by Library of Congress / Unsplash

As such, Michael Kwet recommends developing a threat model.

A threat model is a plan of action to handle threats you face from adversaries. The Electronic Frontier Foundation suggests each person ask the following questions:

1. What do you want to protect?
2. Who do you want to protect it from?
3. How likely is it that you will need to protect it?
4. How bad are the consequences if you fail?
5. How much trouble are you willing to go through to try to prevent those consequences?
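Answering those five questions can be as simple as keeping a small personal worksheet. A sketch, with invented assets, adversaries, and ratings:

```python
# A hypothetical worksheet answering the EFF's five questions per asset.
# Every entry here is an invented example, not a recommendation.
threat_model = [
    {
        "what": "Family photos in cloud storage",       # 1. what to protect
        "from_whom": "Account hijackers",               # 2. protect it from whom
        "likelihood": "medium",                         # 3. how likely
        "consequence": "high",                          # 4. how bad if it fails
        "effort": "Enable two-factor auth; keep local backups",  # 5. effort
    },
    {
        "what": "Private messages",
        "from_whom": "Platform surveillance",
        "likelihood": "high",
        "consequence": "medium",
        "effort": "Move family chats to an end-to-end encrypted app",
    },
]

# Surface the most probable threats first.
urgent = [t["what"] for t in threat_model if t["likelihood"] == "high"]
print(urgent)  # ['Private messages']
```

The value isn't in the code, it's in forcing yourself to answer all five questions for each thing you care about before deciding how much inconvenience you'll tolerate.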

Coping with 21st-century surveillance is similar to coping with security in the physical world. In the physical world, you may protect your home by closing your windows or locking your doors; in the digital world, you need to identify which digital doors and windows you leave open.

A personal approach

I cannot yet fully avoid identity-scraping technologies or hide myself, so I choose a balanced approach that works for my needs: I allow visibility in return for some personal benefit. I deleted my Facebook account and discontinued using Instagram in November 2011; it has been great being off, as the balance was definitely in Facebook's favour. Among their products, my biggest source of concern is WhatsApp, as my family uses it and it is still owned by Facebook.

I use Firefox and the Brave browser (not recommended by the report).

However, Brave offers an optional advertising service that pays users small amounts of money – paid out in Brave’s cryptocurrency – in exchange for viewing ads. This puts pressure on low income people to view advertisements, which are often deceptive and push a consumerist lifestyle on the public. It also perpetuates an ad-supported media model which is corrosive to democracy and media independence.

Yes, I get paid BAT tokens for receiving ads on Brave; over the last year I earned 45 BAT. I want to participate in the crypto/token ecosystem. I am not sure I agree that the ad-supported media model is inherently corrosive. I think the BAT/Brave model reduces the corrosion because you have a choice of what you invest your time into – ads or views. I feel that is a fair choice.

I do subscribe to other end-to-end encrypted or decentralised technologies, but I won't list them here.

Summary

If the product you are using runs off a cloud, the company hosting that cloud exercises authoritarian control over your experience, because you cannot access or modify the software running on their physical servers. How the corporation designs the user experience is final; they are the critical middle agent. You cannot control someone else's computer, but you can choose whether to interact with it.

"There is no cloud, just someone else’s computer."

It is likely that you will have to make trade-offs. I'm grateful I don't have the stress of working at a high-security organisation, with hidden personal-identity requirements, NFC login devices, custom operating systems, strict login procedures, and biometrics. I can also see the merit in other states setting up their own security protocols: China has its “Great Firewall”, and France has its own search engine, Qwant.

My final thought is to promote and use decentralised internet services and decentralised cryptographic shared-ledger systems. Use commons-based solutions embodying principles of self-governance, decentralisation, and federation. Try out these services:

Source: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3748901
Scuttlebutt
Source: https://giswatch.org/
Kingi Gilbert


Producer. Ex-Saatchi & Saatchi, ex-Video Game Producer. Director Ignite Studios. Studied Entrepreneurship Acceleration @ Wharton, and Advertising & Marketing @ A.U.T.
New Zealand & Hawaii