I know for sure that I am not the only one who feels it, but there is something profoundly sick about creating and putting things online these days, compared to how it used to be (think pre-2010). The irony of this reflection post, or rant if you will, is not lost on me.
AI is of course at the front of my mind as I write this, and while it is certainly a worthy aspect to consider, it is not the only thing that makes this all feel different. Big data and data analytics came before the current AI wave and made it abundantly clear to everyone remotely interested in technology and personal liberty that a massive shift had taken place. Gone are the days when a post online was just a post online. We have entered a space where all posts and all interactions can more or less be connected, and a profile can be built on you in real time as you consume and interact. Your interests, likes and dislikes are all mapped out and sold to bidders, with uses ranging from selling trinkets to nudging you towards certain political sentiments. The foremost example of this is the Cambridge Analytica scandal of the 2010s, but they were not the only actors in this space, and there have been many documented cases where data points have been sold that can be connected to a single individual in the "real world". Links at the bottom.
If connecting online activity to physical human individuals has been possible for such a long time, what has changed? For me it is the presumed ease with which it is now happening. No longer limited to the walled gardens of social interaction controlled by the likes of Meta, we have taken it one major step further. Articles and reporters are describing how the current generation of AI models can automatically identify a person across multiple online platforms based on their manner of expression alone. Mannerisms, grammar and typing style are, in aggregate, so distinct that the seemingly isolated online activities of multiple accounts can be connected to one person, and the sum of this information can leak enough that the online world and the physical world become one. In effect, you risk doxxing yourself through your manner of engagement. But surely that only means we need to be extra careful not to post anything that directly relates to our physical lives or actual identities, especially when engaging on contested topics? Here, new laws and regulations, even well-intentioned ones, requiring identification before engagement on news platforms and the like make the process all the easier. Remember that this is no longer just about connecting concrete nuggets of information from multiple sources to build a profile; it is the very mannerisms, style and choices of grammar that tell on you.
These approaches have been combatted before. I remember watching a presentation from the DEF CON convention some time ago on how to sell drugs on the dark web, where the presenter went into great detail about how he operated in order to avoid being caught by the police. He was eventually caught, but he alluded to never having been convicted of the full scale of his operations: the police could not establish sufficiently strong correlations between his various time-limited operations. His operational security involved creating detailed avatars which he would role-play, making a very conscious effort not to let his operations share the same mannerisms and grammatical style. All in an effort to create a clean disconnect, both between the different operations and between them and his physical self.
While I do think this sort of operational security makes a ton of sense when engaging on contentious topics, I do not believe it will be sufficient, for several reasons. First, it is very hard to avoid the subconscious decisions you make when engaging; producing text involves thousands of choices, each of which can tell something about you. Second, it creates a massive lag between consumption and engagement, as well as a large investment of effort in creating, managing and using these personas. Third, there is a highly asymmetrical aspect to engagement. Your one disagreeable engagement can blow up and be used against you, without thought or reflection, while thousands of agreeable engagements before and after come to mean nothing. You are judged solely on the basis of your worst, and the worst does not even have to have been "bad" at the outset. Context gets stripped, lost or ignored, and your attitude and beliefs are reduced to their worst possible interpretations. Gone are the days of charitable interpretation, and long gone are the days when the medium of expression lent itself to an anticipation of completeness. The media of today are far too quick for that.
So then, what is the point of engaging or mumbling into the void?
Finding high-minded reasons for engaging and putting things out is easy: "we" can't just let the "bad guys" win, right? If they get to control the discourse, it becomes a self-fulfilling prophecy of the worst sort, so instead we should all engage and strive to create a healthy and rich climate for ideas and discussion. To be honest, I am not sure I buy any of this. I am too much of a pessimist to think we will see a positive turn in how content is used, and I do not think the social climate lends itself to open and honest engagement. Nor do I think this will change in the foreseeable future.
Thus the title "Feeding the machine": I am just publishing human-created slop for the machine-controlled slop machine, just another data point. But maybe there is still a tiny part of me that wants to hold on to the ideals of thoughtful and genuine engagement, and maybe then it all might become just a bit better.
Data Brokers and Sensitive Data on US Individuals (Duke University)