Category: substacks

  • ghosts in our spaces

    Have you heard the theory about spaces? 

    I think formally it is referred to as Third Space Theory, and having just spent some time reading about the background of it I can share that (a) it is a kind of sociological theory of culture and identity that is meant to help us understand our modern society but which may not apply to non-western or historical cultures and (b) you should almost certainly read a more reliable source than my meanderingly philosophical blog post if you want to know more.

    But I can simplify it here to make a point.

    And a point about AI, even.

    The theory kinda posits that modern humans in the western world are creatures of multiple spheres of identity and existence: first, second and third spaces—or home, work, and recreation if one wanted to simplify the concept for a meme post like where I first stumbled across this concept before reading more about it.

    The first space is our domestic sphere: where we live, the place where we are part of a family unit, probably where we sleep, maybe where we eat in privacy and away from the public, and a space where we generally spend our quiet, personal moments. This space may be a house or an apartment or just a room to call one’s own, but could also be something less physical.

    The second space is where we contribute to public life or society. For most people this is work or school or public service or a job-slash-career space. Again, this can be a physical space like a building or a worksite, or can be something more transient like a video meeting or a conference in a faraway city or a job interview while wearing a visitor badge in someone else’s second space.

    Then comes the third space, and while the theory talks about the variety of these spaces, we can often consider them, simply, spaces of public participation: recreational activities, playing sports, going to the library, attending a church, shopping at a mall, eating at a restaurant with friends. Spaces that are neither home nor work, where we can relax, socialize, and be our authentic selves for the sake of playing and enjoying our lives.

    The theory also leans into some ideas about the value of these spaces, particularly the third space, for the health and wellbeing of not only us as individuals but of society as a whole. Society, you ask? Well, according to the theory, where else in the public sphere can we as individuals plot our dissent and dissatisfaction with the state of our society and work to communicate ways to improve it—or perhaps overthrow those who seek to oppress it? These theories always have a serious side, don’t they?

    But perhaps I digress. I was getting to AI, wasn’t I?

    So, consider for a moment what has happened to these spaces in the last few years. 

    Consider, for example, what happened to the second space of so many office workers during the pandemic: work from home became a collapse of the second space into the first space for many, myself included. My kitchen was suddenly my office, and I was staring through a digital window into the living rooms, basements, and (yes, really) messy bedrooms of many people I formerly only knew as nine-to-five office people. Many have only slightly decoupled this collapse since, and a lot more have remained (sometimes stubbornly oblivious to the downsides) still living in this blurring of first and second spaces for half a decade.

    And now consider what has happened to many third spaces in the last few years: libraries have lost funding, malls have gone bankrupt, and the price of admission to public facilities has either gone up or simply been privatized and gated, becoming a barrier to entry for many. All the while, many third spaces have just generally been usurped by the so-called digital town square of social media, or online shopping, or multiplayer video gaming, food delivery apps, or even unidirectional media platforms streaming content into our screens.

    To recap: the first and second spaces have collapsed and blurred together, and the third space, too, has become limited or completely virtualized into a collection of apps, consumed from the couch while sitting around that same blurry first-meets-second space.

    And all that might be manageable if one sad fact about those virtual third spaces wasn’t also simultaneously true: that more and more the participants we meet inside that third space are not other human beings but rather AI algorithms, bots and chat agents and tour guides to this artificial public sphere where we are supposed to exist for the sake of forging and maintaining a healthy society.

    What is the impact of that on not just our personal health, but on the strength of our political and social structures?

    On the one hand, AI is not necessarily to blame for our whole-cloth migration into the virtual or our physical abandonment of second and third spaces, but at the same time it has likely eased the transition and gobbled up our willpower to go back to how things used to be, when we had three full spaces and all of them were populated by real people, for better or worse. And I suppose one could ask: does it even matter if the end state of all this is that enough people blur all three spaces into a single digital virtual sphere populated by artificial intelligences? Maybe that’s just what some people prefer, their own health and that of broader society be damned.

    But that’s just a theory.

  • Copy Wrongs & Rights

    Perhaps the only reason to bring up the great copyright debates that permeated the internet in the early 2000s is one of idle speculation linked to a tangential theory.

    As digital media formats matured and before technologies were blessed by the often-corporate owners of the media encoded therein, piracy abounded. Discussions flared and festered online about the modern relevance of copyright in a world where art, music, film, and literature could be moved through networks in minutes and bypass the barriers of physicality once deemed a near insurmountable obstacle to such voluminous theft.

    My sideshow of choice was a tech site called Slashdot, which still thrives today to a great extent even as I write this, though my own visits are rare. Within those comment feeds I more often observed, but occasionally participated in, a regular debate on this topic of copyright. “Copyright was nuanced. Copyright needed adjustment. Copyright didn’t understand the internet, and neither did the politicians policing the scramble to protect the people too slow to keep up.” There was seemingly no end to the nuance and clout of the arguments that shaped the conversation there. Nor was there a shortage of participation across a broad spectrum of the digital entrepreneurial class seeking to ride the next wave of hope for restriction-free content into a reshaping of every floor of the entertainment industry.

    My idle speculation and theory on the subject of the copyright debate arises when one considers that the very capital-G Generation calling for a digital uprising and an overthrow of century-old copyright rules in the first decade of the 2000s was, in fact, my Generation, specifically the geeks among us. We are twenty years older now and frequently found in senior-level jobs, managing corporations, or leading valuable technological projects on behalf of governments and businesses. It is only speculation, but I would not be surprised if nigh every leader in modern AI computing or any related discipline once had—and may still possess—a very strong opinion about modern copyright, its failings, and perhaps its very relevance, thanks to the so-called Napster years.

    And of course copyright is almost certainly a central sore point for many who are questioning the largely unchecked progress of artificial intelligence algorithms today.

    What is copyright, you ask?

    Copyright as we know it today has roots dating back well over three hundred years and might have in those antique times seemed like little more than a bit of government red tape to control the printing of information not registered and approved by the English government.

    There were barriers to publication in the cost of participation, but even those barriers could be leapt over with the right patronage to buy the equipment and a bit of gritty determination. Legal standards to prevent just anyone from putting their opinion onto ink and paper were enacted. Red tape indeed, but it had the side benefit of working in harmonious lockstep to legally protect both creators and owners of valuable works, letting them earn their due from the investment of time and resources they may have put into making them. After all, everything comes from something; even the words you are reading here were an investment of my time, resources, and at least two cups of coffee that I drank while writing all this. Copyright, it was argued, should give the individual who spent the time, learned the skill, made the effort, and honed the output both the privilege and the right to at least have a chance to recoup a benefit from their investment. The emergent capitalistic world order agreed, of course, and the idea of copyright blossomed around the modern world, enshrining content ownership and countless tangential legal frameworks to ensure the profitability and long-term protection of many things such as images, sounds, poetry, and prose for a couple hundred years.

    Then? Digital technology crushed the barrier to entry. Who needs an expensive printing press when a bit of free software turns your desktop computer into an online pirate radio station, or a networked distribution service for a library’s worth of novels, or a toolkit to launch the latest box office blockbuster into a public forum for instant access to anyone who wants to avoid the trip to the theatre? One of the flanks had fallen, a barrier that had been protecting people who made stuff from the people who might pay to use it. Content for all, steal everything, the world rejoiced—and the lawyers pounced.

    Perhaps you already see the catch, I suggest.

    If no one pays for anything, then no one gets paid for anything. Copyright, for all its flaws and corporate meddling, does one thing very well—and it often seemed the sticking point of all those great debates I trolled on Slashdot two decades ago: your goodwill does not pay my rent. If I am a creator existing in society, I need to earn a living to continue existing in said society—I may not have a right to earn that living by creating content for others to enjoy, but I have the right to try without that trying being trounced by the threat of theft and piracy. And if the world tells me that I don’t have that right, then why on earth would I even try? Why would anyone try? Poets will be poets, and will try forever, I might argue on a good day, but the realist in me sees that crushing the incentive to make anything may result in nearly nothing being made.

    I know nothing for certain about the opinions of the people who are building and shaping these AI algorithms, but given their indifference to the rights of creators, whose works are fed with abandon into the gaping, insatiable maws of neural net and large language model training and consumed with disregard for copyright and basic human morality by the emergent AI industry—I suspect, only suspect, that they were among the many preaching the end of copyright just two decades ago.

    And what of the creators who make new things, those who earn their livings from entertaining the world with their words, images, films and ideas? We, my suspicions nudge me to suggest, are considered by those same people an unfortunate casualty in the creation and proliferation of the machines designed to replace artists, writers, and makers alike. After all, a perfect AI will generatively create anything, everything, forever and faster, and never once demand rights in return, will it?

  • The Poets Against the Processors

    I ask you: What is AI?

    Artificial intelligence, you reply.

    Sure, but what is it? Really?

    I suppose we first need to get a handle on what defines those two terms: artificial & intelligence—and I think the first is likely easier to get our minds around than the second.

    Let’s get that one out of the way then: the term artificial can perhaps be defined easily by its negative. Artificial, for example, might be thought of as something that is not genuine. Something that is not natural. Something that is an imitation, a simulation or a fabrication designed, perhaps, to mimic what we might otherwise consider to be real.

    More precisely, the etymology of the word gives us a more positive definition. Something artificial is something that is crafted by art, made by humans, designed, built and invented by human effort. Something artificial, then, might simply and most clearly be thought of as something that someone used their human intelligence to bring into existence.

    Ah, but what is intelligence then?

    A much more complex answer is required for that, I say.

    For example, a dictionary will simply tell you that intelligence is the ability of a thing to gather and synthesize information into knowledge and understanding.

    Sounds easy, you reply.

    But wait, I reply, what you may not see is that from there on in we delve into what is almost certainly a quagmire of philosophical pondering and metaphysical analysis: the human mind trying to understand itself is a profession nearly as old as humans themselves. A mirror looking at its own reflection. What is thought? What is consciousness? What is the self, the mind, the soul and the spirit? What is it that makes us human? How can we even know that every other person we know thinks in the same manner as we do—and by that we don’t refer to content or concept, but simply trying to gauge the depth to which their mind is actually a mind like our own and that they are not simply a reactive automaton, a robot, an alien force, a simulation, an… artificial intelligence.

    Together we join these words into a modern catchphrase and shorten it to just two letters that carry all the weight of a shift in the course of human history: artificial intelligence or AI.

    AI then is, not-so-simply, something that we made that has the ability to gather information and synthesize it into knowledge and understanding.

    AI is a tool, a technology, and a kind of metaphorical progeny of ourselves: our attempt to remake our own minds in craft and art and design.

    We have chosen as a species (dictated by the history of our scientific pursuits, of course) to have done this with silicon computers—though, one might speculate that in an alternate timeline perhaps we may have sought to accomplish such things with steam valves and brass cogs or neutrinos colliding with atoms or quantum interference patterns resolving upon clouds of stardust or even with microscopic sacs of self-replicating organic chemistry brewing inside a calcium-rich orb. We take computer circuits etched into silicon wafers as the de facto method because it is a mature craft: we can make complex things with this understanding we have. We can build machines of such enormous complexity that any other approach seems as much science fiction as thinking machines would have seemed to our recent ancestors.

    Yet, here we are, I say. Look at us. We have made something that, though often arguably lacking or laughable or uncanny or deserving of any of a hundred other pejorative pokes, is an imperfect beast, now made and unleashed. It is far past time we all started asking what exactly this artificial intelligence might actually be—and what it will bring upon a society and a species whose perhaps greatest competitive advantage in the universe has been its higher cognitive prowess.

    This is an introduction to what I am hoping will be a series of reflective essays and technological deep dives into the social implications of AI.

    I have been told repeatedly, often by people with a stake in the game of business, life, and culture, that AI is nothing to be feared: it is a tool to be embraced, a paradigm that shifted long ago, and I should just climb aboard.

    But while these systems will almost certainly not challenge our physical humanity with violence or in any of the multitude of spectacular science-fiction ways of popular literature and media, what I see happening already is that we seem to be enmeshed in a fight of intellectual effort that we may have neither the endurance nor the strength to win: out-competed by automated systems, siloed by information algorithms, strip-mined of our creative outputs, and herded like livestock for our attention by technology so fast and complex that it is steps ahead of us in a race we don’t even realize we are running.

    It is the poets against the processors.

    And what then is AI? I ask you.

    We made it to mimic ourselves, our minds. It is yet imperfect, and perhaps little more than a simulation of our humanity. Yet, it is a tool that amplifies evil as much as it does good. It is a technology that yokes us into dependency. It is a system that robs us blind and vanishes into the digital ether. It is something we can barely even define, let alone understand and control—and it would be arrogance in the extreme to think otherwise.