Tag: ai

  • ghosts in our spaces

    Have you heard the theory about spaces? 

    I think formally it is referred to as Third Space Theory, and having just spent some time reading about its background I can share that (a) it is a kind of sociological theory of culture and identity meant to help us understand our modern society, though it may not apply to non-western or historical cultures, and (b) you should almost certainly read a more reliable source than my meanderingly philosophical blog post if you want to know more.

    But I can simplify it here to make a point.

    And a point about AI, even.

    The theory kinda posits that modern humans in the western world are creatures of multiple spheres of identity and existence: first, second and third spaces—or home, work, and recreation, if one wanted to simplify the concept for a meme post like the one where I first stumbled across it before reading more.

    The first space is our domestic sphere: where we live, the place where we are part of a family unit, probably where we sleep, maybe where we eat in privacy and away from the public, and a space where we generally spend our quiet, personal moments. This space may be a house or an apartment or just a room to call one’s own, but it could also be something less physical.

    The second space is where we contribute to public life or society. For most people this is work or school or public service or a job-slash-career space. Again, this can be a physical space like a building or a worksite, or it can be something more transient like a video meeting or a conference in a faraway city or a job interview while wearing a visitor badge in someone else’s second space.

    Then comes the third space, and the theory talks about the variety of these spaces, but often we can consider them, simply, spaces of public participation: recreation activities, playing sports, going to the library, attending a church, shopping at a mall, eating at a restaurant with friends. Spaces that are neither home nor work, spaces where we can relax, socialize, and be our authentic selves for the purpose of playing and enjoying our lives.

    The theory also leans into some ideas about the value of these spaces, particularly the third space, for the health and wellbeing of not only us as individuals but of society as a whole. Society, you ask? Well, according to the theory, where else in the public sphere can we as individuals plot our dissent and dissatisfaction with the state of our society and work to communicate ways to improve it—or perhaps overthrow those who are seeking to oppress it? These theories always have a serious side, don’t they?

    But perhaps I digress. I was getting to AI, wasn’t I?

    So, consider for a moment what has happened to these spaces in the last few years. 

    Consider, for example, what happened to the second space of so many office workers during the pandemic: work from home became a collapse of the second space into the first space for many, myself included. My kitchen was suddenly my office, and I was staring through a digital window into the living rooms, basements, and (yes, really) messy bedrooms of many people I formerly only knew as nine-to-five office people. Many have only partly decoupled the two spaces since, and a lot more have remained (sometimes stubbornly oblivious to the downsides) living in this blurring of first and second spaces for half a decade.

    And now consider what has happened to many third spaces in the last few years: libraries have lost funding, malls have gone bankrupt, and the price of admission to public facilities has either gone up or simply been privatized and gated, becoming a barrier to entry for many. All the while, many third spaces have just generally been usurped by the so-called digital town square of social media, or online shopping, or multiplayer video gaming, or food delivery apps, or even unidirectional media platforms streaming content into our screens.

    To recap: the first and second spaces have collapsed and blurred together, and the third space, too, has become limited or completely virtualized into a collection of apps run by others, consumed from the couch while sitting around that same blurry first-meets-second space.

    And all that might be manageable if one sad fact about those virtual third spaces wasn’t also simultaneously true: that more and more the participants we meet inside that third space are not other human beings but rather AI algorithms, bots and chat agents and tour guides to this artificial public sphere where we are supposed to exist for the sake of forging and maintaining a healthy society.

    What is the impact of that, not just on our personal health, but on the strength of our political and social structures?

    On the one hand, AI is not necessarily to blame for our whole-cloth migration into the virtual or our physical abandonment of second and third spaces, but at the same time it has likely eased the transition and gobbled up our willpower to go back to how things used to be, when we had three full spaces and all those spaces were populated by real people, for better or worse. And I suppose one could ask: does it even matter if the end state of all this is that enough people blur all three spaces into a single digital virtual sphere populated by artificial intelligences? Maybe that’s just what some people prefer, the health of themselves and a broader society be damned.

    But that’s just a theory.

  • the emdash conundrum

    What is that mysterious double-dash and why isn’t it a red flag for inhuman writing?

    Maybe you’ve heard this one before: as I write these words there’s a post actively circulating written by some guy who can tell you “one simple trick” for spotting generative AI content online.

    “Look for the emdash!” he writes. “It’s a dead giveaway.”

    Thus we come to the problem of the emdash.

    Oh, you know what an emdash is, right? Oh, sure, you know—but that guy reading over your shoulder doesn’t so I’ll explain here so that he can keep following along.

    Simply, an emdash is just punctuation. 

    We use all sorts of punctuation in English writing, and the kind of punctuation one uses is often a matter of the form and formality of said writing. There are marks that end sentences, most commonly periods, exclamation marks, and question marks. There are “quotation marks” both double and ‘single’ that call out words or phrases as a kind of contextual clue that these are someone else’s thoughts, words, ideas, or have broader meaning beyond the text one is reading. And then there are all sorts of helper marks that simulate the cadence of speech patterns, the pauses and the pivots towards new ideas. These include commas, parentheses, colons, semi-colons, ellipses, and—you guessed it—emdashes.

    Emdashes are probably the least well known and least frequently used of the bunch, and in plain typing they are basically just a double-dash that becomes a single long dash. A single dash might tie a pair of words together, where an emdash might tie a pair of concepts or sentence-fragments together, and they are often employed (at least as I have found) as a more informal version of the semi-colon: a hint to the reader that a conversational tone is implied, very much like a pregnant pause or a “get ready for this” beat in the reading.

    And I’ll tell you what else: speaking as a writer myself, they are fun to use stylistically once one starts to think in that vibe and to imagine someone reading the words on the page with a cadence that better matches the one in the writer’s head. Also, with modern variable-width fonts pretty much standard now, they make even more sense than they did a decade or two ago, when stricter type-setting rules still mattered. In other words, people are using them a lot more these days, particularly for casual writing.

    One guess what is busy slurping up a lot of modern, casual writing these days and using it to emulate human conversational writing styles.

    Yeah—AI.

    So. Here we are at the emdash problem: when an increasing majority of content is rapid-generated by AI engines, and those engines are emulating the most modern of casual writing that they have pilfered and scraped from public websites, it is almost inevitable that (a) those generated AI texts will use punctuation trends that are common in text written in the last decade and (b) any human reader who is used to more formal writing will immediately misidentify this less-common, human-mimicked punctuation as a red flag gotcha for generative text.

    Yet, it will never be so easy. Don’t let down your guard.

    I can tell you this with confidence because almost everything I’ve written (at least, written casually) in the last couple of years has made frequent use of the emdash as a stylistic choice. I like the emdash. I use the emdash. And your objection to my use of the emdash is no more valid than telling an artist they use too much blue paint or a musician that their choice of chord progression is wrong: these are stylistic choices and—fuck off, I’ll write how I want to write.

    People—human people—use the emdash and it is not a dead giveaway for anything, not even AI.

    Like everything else we see online, we need to be a little suspicious and cautious: we now have the job of using our brains to unravel the source of authorship, and yeah—guess what!—there is no easy quick trick to deducing origin anymore. It’s a toss-up whether a human or an algorithm wrote it.

    The author of this particular viral meme accusation against all emdash-containing text is not entirely wrong. I mean, kinda mostly wrong, but not completely wrong. There will almost certainly be a trend towards greater use (and misuse) of emdashes in generative text for a while, and for the very reasons I wrote above… but emdashes are no smoking gun nor flawless clue. They are but a single part of a complex profile of the origins of the modern written word, a profile that will get more complex as each day passes and more algorithmically generated content floods our feeds. We need to use our very human brains to detect these things and always be skeptical of sources and authors, but that means doing the research to understand those sources: seeking out the profiles and consistent histories of the real people writing things, testing ideas against multiple perspectives, and shining sunlight on simple and stupid solutions to the complex problems we will face as our own humanity is challenged.

    AIs didn’t invent the emdash, and insisting they did is an insult to the thousands of real humans who have adopted this as part of a stylistic toolkit and are trying to write interesting things in what is already an uphill battle against the processors.

  • Copy Wrongs & Rights

    Perhaps the only reason to bring up here the great copyright debates that permeated the internet in the early 2000s is one of idle speculation linked to a tangential theory.

    As digital media formats matured and before technologies were blessed by the often-corporate owners of the media encoded therein, piracy abounded. Discussions flared and festered online about the modern relevance of copyright in a world where art, music, film, and literature could be moved through networks in minutes and bypass the barriers of physicality once deemed a near insurmountable obstacle to such voluminous theft.

    My sideshow of choice was a tech site called Slashdot, which still thrives today to a great extent even as I write this, though my own visits are rare. Within those comment feeds I more often observed, but occasionally participated in, a regular debate on this topic of copyright. “Copyright was nuanced. Copyright needed adjustment. Copyright didn’t understand the internet, and neither did the politicians policing the scramble to protect the people too slow to keep up.” There was seemingly no end to the nuance and clout of arguments that shaped the conversation there. Nor was there a shortage of participation across a broad spectrum of the digital entrepreneurial class seeking to ride the next wave of a hope for restriction-free content into a reshaping of every floor of the entertainment industry.

    My idle speculation and theory on the subject of the copyright debate arises when one considers that the very capital-G Generation calling for a digital uprising and an overthrow of century-old copyright rules in the first decade of the 2000s was, in fact, my Generation, specifically the geeks among us. We are twenty years older now and frequently found in senior-level jobs, managing corporations, or leading valuable technological projects on behalf of governments and business. It is only speculation, but I would not be surprised if nigh every leader in modern AI computing or any related discipline once had—and may still possess—a very strong opinion about modern copyright, its failings and perhaps its very relevance thanks to the so-called Napster years.

    And of course copyright is almost certainly a central sore point for many who are questioning the largely-unchecked progress of artificial intelligence algorithms today.

    What is copyright, you ask?

    Copyright as we know it today has roots dating back well over three hundred years and might have in those antique times seemed like little more than a bit of government red tape to control the printing of information not registered and approved by the English government.

    There were barriers to publication in the cost of participation, but even those barriers could be leapt over with the right patronage to buy the equipment and a bit of gritty determination. Legal standards to prevent just anyone from putting their opinion onto ink and paper were enacted. Red tape indeed, but it had the side benefit of working in harmonious lockstep to legally protect both the creators and the owners of valuable works so they could earn their due from the investment of time and resources they may have put into making them. After all, everything comes from something; even the words you are reading here were an investment of my time, resources, and at least two cups of coffee that I drank while writing all this. Copyright, it was argued, should give the individual who spent the time, learned the skill, made the effort, and honed the output both the privilege and the right to at least have a chance to recoup a benefit from their investment. The emergent capitalistic world order agreed, of course, and the idea of copyright blossomed around the modern world, enshrining content ownership and countless tangential legal frameworks to ensure the profitability and long-term protection of many things such as images, sounds, poetry and prose for a couple of hundred years.

    Then? Digital technology crushed the barrier to entry. Who needs an expensive printing press when a bit of free software turns your desktop computer into an online pirate radio station, or a networked distribution service for a library’s worth of novels, or a toolkit to launch the latest box office blockbuster into a public forum for instant access to anyone who wants to avoid the trip to the theatre? One of the flanks had fallen, a barrier that had stood between the people who made stuff and the people who might pay to use it. Content for all, steal everything, the world rejoiced—and the lawyers pounced.

    Perhaps you already see the catch, I suggest.

    If no one pays for anything, then no one gets paid for anything. Copyright, for all its flaws and corporate meddling, does one thing very well—and it often seemed the sticking point of all those great debates I trolled on Slashdot two decades ago: your goodwill does not pay my rent. If I am a creator existing in society, I need to earn a living to continue existing in said society—I may not have a right to earn that living by creating content for others to enjoy, but I have the right to try without that trying being trounced by the threat of theft and piracy. And if the world tells me that I don’t have that right, then why on earth would I even try? Why would anyone try? Poets will be poets, and will try forever, I might argue on a good day, but the realist in me sees that crushing the incentive to make anything may result in nearly nothing being made.

    I know nothing for certain about the opinions of the people who are building and shaping these AI algorithms. But given their behaviour, the disregard for the rights of creators whose works are fed with abandon into the gaping insatiable maws of neural nets and large language model training, and the general indifference to copyright and basic human morality across the emergent AI industry—I suspect, only suspect, that they were among the many preaching the end of copyright just two decades ago.

    And what of the creators who make new things, those who earn their livings from entertaining the world with their words, images, films and ideas? We, my suspicions nudge me to suggest, are considered by those same people an unfortunate casualty in the creation and proliferation of the machines designed to replace artists, writers, and makers alike. After all, a perfect AI will generatively create anything, everything, forever and faster, and never once demand rights in return, will it?

  • The Poets Against the Processors

    I ask you: What is AI?

    Artificial intelligence, you reply.

    Sure, but what is it? Really?

    I suppose we first need to get a handle on what defines those two terms: artificial & intelligence—and I think the former is likely easier to get our minds around than the latter.

    Let’s get that one out of the way then: the term artificial can perhaps be defined easily by its negative. Artificial, for example, might be thought of as something that is not genuine. Something that is not natural. Something that is an imitation, a simulation or a fabrication designed, perhaps, to mimic what we might otherwise consider to be real.

    More precisely, the etymology of the word gives us a more positive definition. Something artificial is something that is crafted by art, made by humans, designed, built and invented by our own effort. Something artificial then might simply and most clearly be thought of as something that someone used their human intelligence to bring into existence.

    Ah, but what is intelligence then?

    A much more complex answer is required for that, I say.

    For example, a dictionary will simply tell you that intelligence is the ability of a thing to gather and synthesize information into knowledge and understanding.

    Sounds easy, you reply.

    But wait, I reply, what you may not see is that from there on in we delve into what is almost certainly a quagmire of philosophical pondering and metaphysical analysis: the human mind trying to understand itself is a profession nearly as old as humans themselves. A mirror looking at its own reflection. What is thought? What is consciousness? What is the self, the mind, the soul and the spirit? What is it that makes us human? How can we even know that every other person we know thinks in the same manner as we do—and by that I don’t mean content or concept, but simply the depth to which their mind is actually a mind like our own, that they are not simply a reactive automaton, a robot, an alien force, a simulation, an… artificial intelligence.

    Together we join these words into a modern catchphrase and shorten it to just two letters that carry all the weight of a shift in the course of human history: artificial intelligence or AI.

    AI then is, not-so-simply, something that we made that has the ability to gather knowledge and synthesize understanding.

    AI is a tool, a technology, and a kind of metaphorical progeny of ourselves: our attempt to remake our own minds in craft and art and design.

    We have chosen as a species (dictated by the history of our scientific pursuits, of course) to do this with silicon computers—though one might speculate that in an alternate timeline we may have sought to accomplish such things with steam valves and brass cogs, or neutrinos colliding with atoms, or quantum interference patterns resolving upon clouds of stardust, or even with microscopic sacs of self-replicating organic chemistry brewing inside a calcium-rich orb. We take computer circuits etched into silicon wafers as the de facto method because it is a mature craft: we can make complex things with this understanding we have. We can build machines of such enormous complexity that any other approach seems as much science fiction as thinking machines would have seemed to our recent ancestors.

    Yet here we are, I say. Look at us. We have made something that, though often arguably lacking or laughable or uncanny or a thing that draws any of a hundred other pejorative pokes, is an imperfect beast, now made and unleashed. It is far past time we all started asking what exactly this artificial intelligence might actually be—and what it will bring upon a society and a species whose perhaps greatest competitive advantage in the universe has been its higher cognitive prowess.

    This is an introduction to what I am hoping will be a series of reflective essays and technological deep dives into the social implications of AI.

    I have been told repeatedly, often by people with a stake in the game of business, life, and culture, that AI is nothing to be feared, that it is a tool to be embraced, a paradigm that shifted long ago, and that I should just climb aboard.

    But while these systems will almost certainly not challenge our physical humanity with violence or in any of the multitude of science-fiction spectacle ways of popular literature and media, what I see happening already is that we seem to be enmeshed in a fight of intellectual effort that we may have neither the endurance nor the strength to win: out-competed by automated systems, siloed by information algorithms, strip-mined of our creative outputs and reduced to a livestock-like herd farmed for our attention by technology so fast and so complex that it is steps ahead of us in a race we don’t even realize we are running.

    It is the poets against the processors.

    And what then is AI? I ask you.

    We made it to mimic ourselves, our minds. It is yet imperfect, and perhaps little more than a simulation of our humanity. Yet, it is a tool that amplifies evil as much as it does good. It is a technology that yokes us into dependency. It is a system that robs us blind and vanishes into the digital ether. It is something we can barely even define, let alone understand and control—and it would be arrogance in the extreme to think otherwise.

  • undeleted

    To be fair, I didn’t actually read the article.

    In these days of click-bait headlines it is equally likely that any given bit of tripe posted in traditional media is some too-clever journalist writing a bit of sarcastic parody humor prefixed by an all-too-clever title to draw in the crowds who are almost certainly looking for some bit of legitimate-seeming news to validate their screwball wacky viewpoints. The author then typically tries to write some clever well-actuallies… but then who actually needs the article when most of us never read past the headline anyhow?

    So I didn’t read it. Couldn’t read it. At least not without forking out money for a subscription. So, won’t read it. Can’t read it. Don’t need to read it.

    The headline was “Go Delete Yourself from the Internet. Seriously, Here’s How” from the Wall Street Journal.

    And in this day and age of terrible tech advice abounding I’m pretty sure this was not parody. It might have been well-meaning. It might have even been sensible. But it was probably not good advice.

    Today is a day I have marked in my calendar as my “blogiversary,” twenty-four years to the day since I made my first post on my first blog. I didn’t put it into my calendar until years later, when I noticed that the first post in the archives of the blog was, and would for a long time be, April 20, 2001.

    And then one day I deleted myself from the Internet. Seriously.

    There were a lot of good reasons to have done it. I was, what? Twenty-four when I first posted. I had just moved out of a backwards little life in a backwards little city (you can ready-aim-fire at me for being judgemental, but you could easily google the name of said city and you’d be greeted with a lot of right-wing, nationalistic, hyper-religious news-adjacent references that would vouch for my then and current opinion of the place). I had a lot of growing to do, and I did a lot of said growing right there, live on that blog, sixteen years’ worth. A lot of that blogging, those growing and changing opinions, may not have aged well, and good or bad, I don’t care to read and edit two million words of my blathering personal blog writing for any reason.

    So I deleted myself. I deleted myself when I got a semi-public job. I deleted myself when I started managing people, particularly a few stubborn ones who didn’t like me, and I deleted myself when the blog started scraping up against the gentle opposition of my peers.

    But here we are in 2025 and there are suddenly and realistically a lot of reasons to undelete oneself from the internet. There are a lot of reasons to hold one’s ground and push back against the very idea of ceding this digital space.

    Mostly? There is a vacuum left in the space where each person deletes themselves from the internet, and that vacuum is almost instantly filled by something else. Something bad.

    Maybe some terrible AI content will slurp into the vacuum.

    Perhaps what people will see will instead just be more terrible influencer content and the tidal wave of stealthy and deceptive advertising.

    Or, worst of all, and what I fear the most: the vacuum will be filled by the relentless creeping onslaught of political propaganda and the opinions (agree with me or not) which are increasingly anti-fact, anti-science, anti-intellectual, and anti-reality. I fear the space will just get filled with more lies, more manipulation, and more noise designed to overwhelm and crush what little remains of these fragments of freedom and democracy to which we cling.

    April 20, 2001 was a few months before 9/11, a day which for reasons beyond the obvious changed the trajectory of western civilization. On that day we went from being an optimistic society progressing towards something special to collectively doing a u-turn into fear and suspicion and surrendering our rights for the illusion of slightly more safety. Now, arguably, many of those rights have been gone for a generation, nearly twenty-four years gone, and yet we all feel less safe than ever. What a terrible trade. What a terrible decision we all made together.

    Right now, a big part of me feels like that happened so easily because we deleted ourselves from the conversation. Deleted ourselves from reality, from truth, from the fight, from purpose, from everything. We deleted ourselves from the internet, a great big town square where we should all be shouting and having a voice, arguing and making better choices for us all. We deleted ourselves and turned over our voices to corporate social media, to algorithms, to AI, to billionaires who claim that they are guardians of that voice but who only put it in chains.

    We deleted ourselves and surrendered.

    I am undeleting myself. This stupid little resurrected blog is the beginning of that effort. I am trying to reclaim my voice, small and unpracticed as it is.

    Undeleted.

    You next. Stay tuned.