Category: technology & toys

I am a nerd by nature and a geek by trade, and I have a few things to say about all kinds of technology from enterprise platforms to playful games.

  • code monkey, one

    I have been writing code for nearly as long as I have been using computers—which, ugh, sparks my nostalgic angst fuse to write, but that was in grade school in the nineteen eighties.

    To that point, I have been coding more and more these last few years, and making increasingly meaningful tools in code.

    I thought it was high time I started a reflective series of posts on the topic. 

    Oh, sure, you can toddle on over to one of my other blogs and read about the intricacies of my coding efforts when I choose to write about them. I am specifically referring to my game development blog where I was for a while simul-writing about the creative processes behind indie game design—but bluntly those posts tend to get into coding and design weeds quite deeply and are not everyone’s cup of joe. 

    Code monkey, part one then—and it begins with a wistful reflection on the recent overhaul of my Microfeed Applet. 

    Three years ago I was livid.

    I was so damn sick of the broken-ass nature of social media that I set out to divest myself of participation on the platform which I had once loved and cherished, but which had betrayed my trust: Instagram. Doesn’t that sound weird, to confess such adoration for a social media platform? Well, it was once a triumphant tool of personal expression and sharing. I could make comics or photos or art and spread them to friends and the world. It was like the perfect digital self-publication tool made real and easy. But those damn platforms do as those damn platforms are wont to do: they blurred the notion of customer and user, and suddenly I noticed that I was no longer a customer, but just another user who flailed about in an algorithmic hell of lost potential.

    In reaction and protest, I wrote some code to upload my photos and text to my own server: 8r4d-stagram, I called it. It kinda looked like a rudimentary version of Instagram, which back then was the whole point: if they were going to fuck up their platform, then I could just make my own. I can code personal projects, and it’s not like I was going to sell it, so who cares what or who I replicated?
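
    For flavour, a minimal sketch of what that kind of self-hosted photo-posting endpoint might look like (Python and Flask here purely as illustration; the route, form fields, and storage layout are placeholder assumptions, not the actual 8r4d-stagram internals):

    ```python
    # Illustrative sketch only -- not the actual 8r4d-stagram code.
    # Accept an uploaded photo plus a caption and store both on the server.
    import os
    import time

    from flask import Flask, redirect, request

    app = Flask(__name__)
    UPLOAD_DIR = "static/photos"  # assumed storage location
    os.makedirs(UPLOAD_DIR, exist_ok=True)

    @app.route("/post", methods=["POST"])
    def create_post():
        photo = request.files["photo"]             # image from the upload form
        caption = request.form.get("caption", "")  # optional caption text
        stamp = str(int(time.time()))              # crude unique-ish filename
        photo.save(os.path.join(UPLOAD_DIR, stamp + ".jpg"))
        with open(os.path.join(UPLOAD_DIR, stamp + ".txt"), "w") as f:
            f.write(caption)                       # keep the caption beside the image
        return redirect("/")                       # back to the (hypothetical) feed page

    if __name__ == "__main__":
        app.run()
    ```

    The real tool obviously grew well beyond a single endpoint, but the core idea is about that simple: accept an image and some text, save them on a server you control, and render them into a feed you own.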

    We went to New York a couple weeks later and there I used the new little photo posting system every day to post pictures from our trip. It was clunkier than Instagram, to be sure. Of course it was. It was essentially a home-brewed, web-based, beta-version of a billion dollar platform. It could never compete in real life, but it was good enough for me—and I took a lot of notes on what worked and what didn’t. QA on the fly, on the road.

    That was nearly three years prior to writing this post. In those years I have tweaked and improved the tool in fits and bursts, but improved it nonetheless. I have extended it, adapted it, fine-tuned it and overhauled the guts of how it worked inside. I have added features, removed some of them days or weeks later, enhanced security, broadened the flexibility and made it work so much better than it did during that New York trial period.

    Code, after all, is one of those iterative efforts. A thing you make might never be done so long as you can keep thinking of new ways of bending and blurring what you are trying to make it do, and then updating and improving it. That’s the joy.

    I have built hundreds of little programs over the decades, but only a handful have amounted to anything more than toys. My Microfeed Applet is one of those that has become in its own right so much more than a throwaway project.

    The last couple of weeks I have put my head back into the code and worked to push it even closer to maturity and even further from a simple Instagram clone. I reskinned the design. I added a menu system. I fine-tuned the back end code that you’ll never see but that removes even more of the “clunk.” I refined the usability. All of this is not just in anticipation of another vacation trial period, with me taking the tool to Japan to post our adventures in a few months, but because I am an iterative code monkey-type who thrives on continuously improving his tools, sharpening his blade, and enhancing his own skill. I use it. I learn from making it.

    And now that I have over a thousand posts on my own faux-social site, every code tweak makes it easier to keep using it and not go back to broken-ass platforms.

  • head over feets, seven

    With the pool closed now, I have been a couple of things fitness-wise: frazzled and lazy. I mean, it is going to take me a couple solid weeks to find a rhythm and routine again, and one of those fail points has definitely reared up as my lack of logging everything here. Yeah, I haven’t posted—not that anyone but me is checking, but accountability is accountability, even to oneself.

    So I figured I would do two things: (1) try for a running streak in September and (2) reset this log starting last Saturday (back in August) when my running streak properly started.

    So Saturday, huh? Yeah, I woke up feeling motivated and drove down to Park Run. I don’t want to say I’m a rare participant in Park Run, but the August long weekend was only my second outing of the year and my ninth overall. The vibe is that of a race (even though people have literally argued with me about this online) and for me it checks enough boxes—start and finish lines, timed results, online records, lots of participants—that I feel like I’m racing, so I try a bit harder. As it was I pushed myself and came in just barely under thirty minutes, good but not great, though anything under thirty feels like I’m not completely out of shape.

    It doesn’t strictly count as a fitness point, but I ordered a pair of wireless waterproof bone-conduction headphones. I’m a normal kind of guy, after all, and I like to listen to some tunes while I work out. I tried the whole fruit-based pod thing and they get sweaty and fall out, and I know the new ones have improved—but then I saw that the waterproof wrap-around versions (no brands mentioned) were on the reward points website and so, I’m like, that seems like an upgrade for my purposes. They are currently on order and I won’t say it’s a done deal, but I expect them in a week or two.

    Sunday we met thirty minutes early for our usual run because the forecast was for hot, hot, hot by mid-morning—and it wasn’t wrong. We logged an honest eight klicks and tried to keep to the shade.

    I figure that pretty much anything I log with my watch counts as fitness, so having logged a two-hour paddle down the river with my wife and dog on Sunday afternoon I can say, yeah, kayaking down a river is a workout. I was tired like crazy that evening.

    I was feeling dedicated to my still-young run streak on Monday, but the wildfire smoke had rolled in and we cancelled our traditional breakfast run meetup—in that we still went for breakfast but skipped the run for health reasons. But I had a run on my mind and a rec centre pass in my wallet, so I hit the track and logged a five-klick track run shortly after we washed up from dinner.

    Now, summer vacation proper is over, effective as I write this with my kid off to school again, and my days can be a little more focussed on getting back into the fall and winter routine—and that includes some serious ramping up of my training. Stay tuned.

  • the emdash conundrum

    What is that mysterious double-dash and why isn’t it a red flag for inhuman writing?

    Maybe you’ve heard this one before: as I write these words there’s a post actively circulating written by some guy who can tell you “one simple trick” for spotting generative AI content online.

    “Look for the emdash!” he writes. “It’s a dead giveaway.”

    Thus we come to the problem of the emdash.

    Oh, you know what an emdash is, right? Oh, sure, you know—but that guy reading over your shoulder doesn’t so I’ll explain here so that he can keep following along.

    Simply, an emdash is just punctuation. 

    We use all sorts of punctuation in English writing, and the kinds of punctuation one uses are often a matter of the form and formality of said writing. There are punctuation marks that get used to mark the end of sentences, most commonly periods, exclamation marks, and question marks. There are “quotation marks” both double and ‘single’ that call out words or phrases as a kind of contextual clue that these are someone else’s thoughts, words, or ideas, or have broader meaning beyond the text one is reading. And then there are all sorts of helper punctuation marks that get used to help simulate the cadence of speech patterns, like pauses and passes towards new ideas. These include commas, parentheses, colons, semi-colons, ellipses, and—you guessed it—emdashes.

    Emdashes are probably the least well known and least frequently used of the bunch, and are basically just a double-dash. A single dash might tie a pair of words together, whereas an emdash might tie a pair of concepts or sentence-fragments together, and it is often employed (at least as I have found) as a more informal version of the semi-colon: a hint to the reader that a conversational tone is implied as one reads, very much used like a pregnant pause or a “get ready for this” beat in the reading.

    And I’ll tell you what else: speaking as a writer myself, they are fun to use stylistically once one starts to think in that vibe, and to imagine someone reading the words on the page in a cadence that better matches the one in the writer’s head. Also, with modern variable-width fonts pretty much standard now, they make even more sense than they did a decade or two ago, when strict type-setting rules were more relevant. In other words, people are using them a lot more these days, particularly for casual writing.

    One guess what is busy slurping up a lot of modern, casual writing these days and using it to emulate human conversational writing styles.

    Yeah—AI.

    So. Here we are at the emdash problem: when an increasing majority of content is rapid-generated by AI engines, and those engines are emulating the most modern casual writing that they have pilfered and scraped from public websites, it is almost inevitable that (a) those generated AI texts are going to use punctuation trends that are common in text written in the last decade and (b) any human reader who is used to more formal writing will immediately misidentify this less-common, human-mimicked punctuation as a red-flag gotcha for generative text.

    Yet, it will never be so easy. Don’t let down your guard.

    I can tell you this with confidence because almost everything I’ve written (at least, written casually) in the last couple of years has made frequent use of the emdash as a stylistic choice. I like the emdash. I use the emdash. And your objection to my use of the emdash is no more valid than telling an artist they use too much blue paint or a musician that their choice of chord progression is wrong: these are stylistic choices and—fuck off, I’ll write how I want to write.

    People—human people—use the emdash and it is not a dead giveaway for anything, not even AI.

    As with everything else we see online, we need to be a little suspicious and cautious: it is now our job to use our brains to unravel the source of authorship, and yeah—guess what!—there is no easy quick trick to deducing origin anymore. It’s a toss-up whether a human or an algorithm wrote it.

    The author of this particular viral meme accusation against all emdash-containing text is not entirely wrong. I mean, kinda mostly wrong, but not completely wrong. There will almost certainly be a trend towards greater use (and misuse) of emdashes in generative text for a while, and for the very reasons I wrote above… but emdashes are neither smoking gun nor flawless clue. They are but a single part of a complex profile of the origins of the modern written word, a profile that will get more complex as each day passes and more algorithmically generated content floods our feeds. We need to use our very human brains to detect these things and always be skeptical of sources and authors. That means doing the research to understand those sources: seeking out the profiles and consistent histories of the real people writing things, testing ideas against multiple perspectives, and shining sunlight on simple and stupid solutions to the complex problems we will face in the challenges to our own humanity.

    AIs didn’t invent the emdash, and insisting they did is an insult to the thousands of real humans who have adopted this as part of a stylistic toolkit and are trying to write interesting things in what is already an uphill battle against the processors.

  • Copy Wrongs & Rights

    Perhaps the only reason to bring up the great copyright debates that permeated the internet in the early 2000s here is one of idle speculation linked to a tangential theory.

    As digital media formats matured and before technologies were blessed by the often-corporate owners of the media encoded therein, piracy abounded. Discussions flared and festered online about the modern relevance of copyright in a world where art, music, film, and literature could be moved through networks in minutes and bypass the barriers of physicality once deemed a near insurmountable obstacle to such voluminous theft.

    My sideshow of choice was a tech site called Slashdot, which still thrives today to a great extent even as I write this, though my own visits are rare. Within those comment feeds I more often observed, but occasionally participated in, a regular debate on this topic of copyright. “Copyright was nuanced. Copyright needed adjustment. Copyright didn’t understand the internet, and neither did the politicians policing the scramble to protect the people too slow to keep up.” There was seemingly no end to the nuance and clout of the arguments that shaped the conversation there. Nor was there a shortage of participation across a broad spectrum of the digital entrepreneurial class seeking to ride the next wave of hope for restriction-free content into a reshaping of every floor of the entertainment industry.

    My idle speculation and theory on the subject of the copyright debate arises when one considers that the very capital-G Generation calling for a digital uprising and an overthrow of century-old copyright rules in the first decade of the 2000s was, in fact, my Generation, specifically the geeks among us. We are twenty years older now and frequently found in senior-level jobs, managing corporations, or leading valuable technological projects on behalf of governments and business. It is only speculation, but I would not be surprised if nigh every leader in modern AI computing or any related discipline once had—and may still possess—a very strong opinion about modern copyright, its failings and perhaps its very relevance thanks to the so-called Napster years.

    And of course copyright is almost certainly a central sore point for many who are questioning the largely unchecked progress of artificial intelligence algorithms today.

    What is copyright, you ask?

    Copyright as we know it today has roots dating back well over three hundred years and might have in those antique times seemed like little more than a bit of government red tape to control the printing of information not registered and approved by the English government.

    There were barriers to publication in the cost of participation, but even those barriers could be leapt over with the right patronage to buy the equipment and a bit of gritty determination. Legal standards to prevent just anyone from putting their opinion onto ink and paper were enacted. Red tape indeed, but it had the side benefit of working in harmonious lockstep to legally protect both the creators and the owners of valuable works, letting them earn their due from the investment of time and resources they may have put into making them. After all, everything comes from something; even the words you are reading here were an investment of my time, resources, and at least two cups of coffee that I drank while writing all this. Copyright, it was argued, should give the individual who spent the time, learned the skill, made the effort, and honed the output both the privilege and the right to at least have a chance to recoup a benefit from their investment. The emergent capitalistic world order agreed, of course, and the idea of copyright blossomed around the modern world, enshrining content ownership and countless tangential legal frameworks to ensure the profitability and long-term protection of many things such as images, sounds, poetry and prose for a couple of hundred years.

    Then? Digital technology crushed the barrier to entry. Who needs an expensive printing press when a bit of free software turns your desktop computer into an online pirate radio station, or a networked distribution service for a library’s worth of novels, or a toolkit to launch the latest box office blockbuster into a public forum for instant access by anyone who wants to avoid the trip to the theatre? One of the flanks had fallen, a barrier that had been protecting the people who made stuff from the people who might pay to use it. Content for all, steal everything, the world rejoiced—and the lawyers pounced.

    Perhaps you already see the catch, I suggest.

    If no one pays for anything, then no one gets paid for anything. Copyright, for all its flaws and corporate meddling, does one thing very well—and it often seemed the sticking point of all those great debates I trolled on Slashdot two decades ago: your goodwill does not pay my rent. If I am a creator existing in society, I need to earn a living to continue existing in said society—I may not have a right to earn that living by creating content for others to enjoy, but I have the right to try without that trying being trounced by the threat of theft and piracy. And if the world tells me that I don’t have that right, then why on earth would I even try? Why would anyone try? Poets will be poets, and will try forever, I might argue on a good day, but the realist in me sees that crushing the incentive to make anything may result in nearly nothing being made.

    I know nothing for certain about the opinions of the people who are building and shaping these AI algorithms. But given their behaviour, and their indifference to the rights of both creators and their works (works fed with abandon into the gaping, insatiable maws of neural nets and large language model training, and generally consumed with little regard for copyright or basic human morality by the emergent AI industry), I suspect, only suspect, that they were among the many preaching the end of copyright just two decades ago.

    And what of the creators who make new things, those who earn their livings from entertaining the world with their words, images, films and ideas? We, my suspicions nudge me to suggest, are considered by those same people an unfortunate casualty in the creation and proliferation of the machines designed to replace artists, writers, and makers alike. After all, a perfect AI will generatively create anything, everything, forever and faster, and never once demand rights in return, will it?

  • The Poets Against the Processors

    I ask you: What is AI?

    Artificial intelligence, you reply.

    Sure, but what is it? Really?

    I suppose we first need to get a handle on what defines those two terms: artificial & intelligence—and I think the former is likely easier to get our minds around than the latter.

    Let’s get that one out of the way then: the term artificial can perhaps be defined easily by its negative. Artificial, for example, might be thought of as something that is not genuine. Something that is not natural. Something that is an imitation, a simulation or a fabrication designed, perhaps, to mimic what we might otherwise consider to be real.

    More precisely, the etymology of the word gives us a more positive definition. Something artificial is something that is crafted by art, made by humans, designed, built and invented by our own effort. Something artificial then might simply and most clearly be thought of as something that someone used their human intelligence to bring into existence.

    Ah, but what is intelligence then?

    A much more complex answer is required for that, I say.

    For example, a dictionary will simply tell you that intelligence is the ability of a thing to gather and synthesize information into knowledge and understanding.

    Sounds easy, you reply.

    But wait, I reply, what you may not see is that from there on in we delve into what is almost certainly a quagmire of philosophical pondering and metaphysical analysis: the human mind trying to understand itself is a profession nearly as old as humans themselves. A mirror looking at its own reflection. What is thought? What is consciousness? What is the self, the mind, the soul and the spirit? What is it that makes us human? How can we even know that every other person we know thinks in the same manner as we do—and by that we don’t refer to content or concept, but simply to gauging the depth to which their mind is actually a mind like our own, and that they are not simply a reactive automaton, a robot, an alien force, a simulation, an… artificial intelligence.

    Together we join these words into a modern catchphrase and shorten it to just two letters that carry all the weight of a shift in the course of human history: artificial intelligence or AI.

    AI then is, not-so-simply, something that we made that has the ability to gather information and synthesize it into knowledge and understanding.

    AI is a tool, a technology, and a kind of metaphorical progeny of ourselves: our attempt to remake our own minds in craft and art and design.

    We have chosen as a species (dictated by the history of our scientific pursuits, of course) to do this with silicon computers—though one might speculate that in an alternate timeline we may have sought to accomplish such things with steam valves and brass cogs, or neutrinos colliding with atoms, or quantum interference patterns resolving upon clouds of stardust, or even with microscopic sacs of self-replicating organic chemistry brewing inside a calcium-rich orb. We take computer circuits etched into silicon wafers as the de facto method because it is a mature craft: we can make complex things with this understanding we have. We can build machines of such enormous complexity that any other approach seems as much science fiction as thinking machines would have seemed to our recent ancestors.

    Yet here we are, I say. Look at us. We have made something that, though often arguably lacking or laughable or uncanny or a thing that draws any of a hundred other pejorative pokes, is an imperfect beast, now made and unleashed. It is far past time we all started asking what exactly this artificial intelligence might actually be—and what it will bring upon a society and a species whose perhaps greatest competitive advantage in the universe has been its higher cognitive prowess.

    This is an introduction to what I am hoping will be a series of reflective essays and technological deep dives into the social implications of AI.

    I have been told repeatedly, often by people with a stake in the game of business, life, and culture, that AI is nothing to be feared, that it is a tool to be embraced and a paradigm that has long since shifted, and that I should just climb aboard.

    But while these systems will almost certainly not challenge our physical humanity with violence or in any of the multitude of science-fiction spectacle ways of popular literature and media, what I see happening already is that we seem to be enmeshed in a fight of intellectual effort that we may have neither the endurance nor the strength to win: out-competed by automated systems, siloed by information algorithms, strip-mined of our creative outputs and reduced to a livestock-like herd farmed for our attention by technology so fast and so complex that it is steps ahead of us in a race we don’t even realize we are running.

    It is the poets against the processors.

    And what then is AI? I ask you.

    We made it to mimic ourselves, our minds. It is yet imperfect, and perhaps little more than a simulation of our humanity. Yet, it is a tool that amplifies evil as much as it does good. It is a technology that yokes us into dependency. It is a system that robs us blind and vanishes into the digital ether. It is something we can barely even define, let alone understand and control—and it would be arrogance in the extreme to think otherwise.