Yet Another Chapter

Another four months, another change…

Well, Summer Term is upon us, and for once I have something school related to do.

Though since I’ll be receiving my Master’s Degree by the fall, this could very well be the last university related thing I ever do.

We’ll have to see.

Anyway, in a few short hours I will be starting my summer internship (the final requirement for my Public History degree before I can graduate) at Fanshawe Pioneer Village, located in the city of London, Ontario. Fanshawe Pioneer Village recreates the early life of settlers in Middlesex County, and was set up in the late 1950s, making it a rather early historic village for Canada.

I’ll be working to help preserve and catalogue several collections that have been stored in perhaps not optimal conditions, while also working with the public and on an organizational project within the Village’s archives.

I’ll have more on all of that later, so stay tuned!

One of the Village People,

Scott W.E. Dickinson

Wrapping Up WALL-E II

Well, that’s it for the ol’ robot.

We had a little presentation last week of all the projects our class had been working on. They certainly seemed impressed by the little back and forth between WALL-E and me. More on that further down.

I promised in my last post a more in-depth discussion of how I programmed my WALL-E animatronic, and here it is.
Now that the program (or ‘patch’, as Max 7 likes to call them) is fully completed, I can finally show it and explain how it all works.


Here you can see just how large this patch has grown. All this is required for a show lasting about a minute- and that’s with me providing most of the dialogue! WALL-E II mostly reacts to what I say- or, rather, I time my questions so that his movements appear to correspond to what I ask. Voice recognition was, I felt, perhaps a little too ambitious for this project. Since the patch is so large, I will break it down into four sub-sections, to make the explanation a little easier.

Servo 1

This is the first section of the patch. Here you can see how the entire program is activated, along with the first set of WALL-E’s movements. No. 1 is a keyboard input, which allows Max to register which keys are being pressed on my computer. The indicator connected to it shows which key Max is currently detecting. Each key on the keyboard is assigned a number (even the letters- confusing, no?), and the four numbers on No. 2 (28, 29, 30, 31) represent the four arrow keys: left, right, up and down, in that order. As you can see, four connecting lines leave the route function and move further down into the patch. Each one is connected to one of WALL-E’s four sets of movement commands. The route function sorts the input from the key function based on what that input is. When the left arrow key (which has 28 as its corresponding number) is pressed, route sends that signal to the button you can see on screen.
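For anyone without Max in front of them, the key-to-route dispatch described above can be sketched in ordinary code. This is a hypothetical Python analogue, not the patch itself- the key codes (28 to 31) come from the post, but the sequence names are made up for illustration:

```python
# Hypothetical sketch of the [key] -> [route 28 29 30 31] logic.
# Each arrow key's code triggers one of four movement sequences;
# any other key falls through unmatched, like route's rightmost outlet.

SEQUENCES = {
    28: "left-arrow sequence",   # e.g. the first set of eye/head movements
    29: "right-arrow sequence",  # e.g. head turn plus a sound file
    30: "up-arrow sequence",
    31: "down-arrow sequence",
}

def route(key_code):
    """Dispatch a key code to its sequence name, like Max's route object."""
    return SEQUENCES.get(key_code)  # None = no match, nothing happens
```

In the real patch the matched signal hits a button that jump-starts a counter; here it would simply select which sequence to run.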

The button is one of two switches that I experimented with. The button, like its real-world namesake, stays active only as long as someone is acting upon it, while the toggle (not shown here) is like a light switch- it stays flipped even after you walk away. Since I only needed to ‘jump-start’ each sequence of actions, I used a button.

No. 3 is a counter function. This particular counter has been set to count to seven: it steps from zero to seven (0, 1, 2, 3, 4, 5, 6, 7), looping back to zero after seven is reached. When controlled by a button, the counter counts forward once for each ‘press’. With a toggle, it would keep counting forward until the toggle was switched off. Since I wanted to use the keyboard as little as possible (so as to sustain the illusion that I was not directly controlling WALL-E), I preferred the button. If you look at No. 4, the line function has two sets of connectors coming from its outputs. One set travels down to the bottom of the patch, where it connects to what actually outputs the control signals to the servo control board on WALL-E himself. The other set of connections returns to the counter function above. This allows the patch to send each set of instructions automatically, without requiring an operator to press a key after each movement.

It works like this. Normally, the counter rests on seven. With no inputs from anywhere, this is a neutral state. Once the operator presses the corresponding key, the counter resets to zero. You will have noticed the route function connected to this counter. When the counter resets to zero, it sends a signal to the button connected to it. When this button is hit, it momentarily activates the ‘message’ attached to it. Each of these messages (the rounded grey rectangles with numbers in them) is one of the servo commands. The first two numbers represent the start and end positions for the servo, and the third is the amount of time this movement will take in milliseconds (for reference, 1000 milliseconds is 1 second). The command is then passed through the line function, which helps smooth out the movement of the servos, and down to the servo output functions. A command passing through the line function also sends a signal through the connectors leading back to the counter, causing it to jump to the next number, which is 1. This, of course, triggers the commands connected to 1. The counter can no longer rest on any number besides seven (which does not have any actions connected to it), since any other number will trigger the counter to move forward until it hits seven, which is now its rest state.
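The whole self-advancing loop can be sketched as plain code. Again, this is a hypothetical Python analogue of the mechanism, not the Max patch- the (start, end, milliseconds) message format is from the post, but the specific positions and timings below are invented for illustration:

```python
import time

# Hypothetical sketch of the self-advancing counter. Each step is a servo
# message: (start_position, end_position, duration_ms). After a command is
# sent, the signal fed back from the line function advances the counter,
# which keeps moving until it comes to rest on the final count (7).

STEPS = {
    0: (90, 120, 500),    # positions and timings are made-up examples
    1: (120, 60, 1000),
    2: (60, 90, 250),
    # steps 3-6 would hold the rest of the show; 7 has no action attached
}

def run_sequence(rest_count=7, send=print, wait=time.sleep):
    count = 0                   # pressing the arrow key resets the counter
    while count < rest_count:
        if count in STEPS:      # counts with no message attached are skipped
            start, end, ms = STEPS[count]
            send(f"servo: {start} -> {end} over {ms} ms")
            wait(ms / 1000)     # stand-in for the line function's ramp time
        count += 1              # the feedback signal from the line function
    return count                # the counter now rests on seven
```

Run once, it plays every step in order and then sits idle until the key is pressed again- exactly the ‘jump-start and let it run’ behaviour described above.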

There are two line functions because they each control a different servo. One controls the vertical movement of WALL-E’s eyes, the other the horizontal movement of his head.

Servo 2

Here’s the second part of the patch. Nos. 1, 2 and 4 are much the same as in the last section. This button corresponds to the right arrow key, and the counter goes to 10, since this is a slightly longer sequence. No. 3 is the big change here, since it is a new action, unrelated to the servo movements. It is actually very straightforward: it consists of an mp3 sound file, an audio output function and a toggle (a button doesn’t work here- I need the file to play all the way through) connected to the route function. When the counter reaches the right number, it activates the sound, which plays through my computer’s speakers. You may notice that there is more than one connector coming from the same route output. This is so that WALL-E can move his head in time with the sound file, to make it look like he is speaking to his audience. Also note that not all the messages have the same duration. If I want WALL-E to perform an action quickly or slowly, I must alter the timing until the movement conveys the feeling I want. One can also connect lines from several route outputs to the same button, so that WALL-E performs the same action several times. I found this very useful when I wanted him to shake or nod his head.

Servo 3

The third part is not very interesting. Note the gaps in the route function’s outputs. I found, while programming, that when both servos perform actions simultaneously, each line function sends a signal back to the counter, moving it ahead two spots instead of one. When I tested this section, the counter would automatically roll over as too many signals came in, putting it in a permanent loop. I solved this by making the counter go to a higher number (it was originally seven) and spacing out the lines on the route function. Now it goes to ten and stays there until I reset it manually. At the bottom of this section there is yet another set of line functions, which will join all the rest at the bottom of the patch.
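That double-advance bug is easy to reproduce in miniature. This is a hypothetical sketch in plain Python (not Max) of why the counter skips when two servos fire at once:

```python
# When two servos act on the same count, each line function sends one
# signal back, so the counter jumps ahead two spots instead of one.
# The fix was a higher maximum count, with gaps between actions on the
# route outputs, so the counter comes to rest instead of looping forever.

def advance(count, active_servos, rest_count=10):
    """Each active servo bangs the counter once; it rests at rest_count."""
    return min(count + active_servos, rest_count)
```

With one servo per step the counter moves from 0 to 1 as intended; with both servos on a step it moves from 0 straight to 2, skipping 1- which is why the actions sit on alternating numbers, with the skipped counts left empty.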

Servo 4

Here’s the bottom of the patch. It’s been a bit of a ride, hasn’t it? Here we see the last set of line functions (No. 1). They, along with the other six we’ve seen, connect to the set of indicators that tell the operator what the positions of the servos are (No. 2). From there the signals travel to the output functions, which were pulled directly from the servomotor patch that comes with every copy of Max 7 (Nos. 3 & 4).

Four sets of line functions may seem a bit redundant, but they were necessary. First, this patch grew so large that rather than try to scroll through it to connect everything together in one place, I created several pairs of line functions to act as local ‘outlets’ that went directly to the servo outputs. That way, I didn’t have to thread all my lines down to the bottom of the page. Given the number of commands, that would have been messy, hard to inspect and impossible to troubleshoot. As well, each counter required a separate input. If all four counters were attached to the same line, all four would have activated at once, sending the servos a confusing array of conflicting commands.

I’m reasonably happy with how WALL-E turned out. For my first attempt at something like this, I feel that he was a success and that his movements added something to the little ‘talk’ I had with him. If I ever rebuild him, or make another animatronic, there are a few things on my wish-list. The first is a greater range of motion. I was very pleased with how expressive the movement of his eyes was, but I found that WALL-E could only really turn his head to the left, not to the right. In the future, bigger servos, or ones with greater ranges of motion, would be preferred. I also wish that I had had time to design a few more servos into WALL-E. My original plan from January included an arm that WALL-E could wave up and down. Thanks to time constraints, I decided to leave the arm (which I had made) off the final body. As well, I would have liked to put a motor controlling some eyebrows (for added expressiveness) on WALL-E’s head, but the motor for his eye movements bent his head frame badly by itself, and I did not feel it could stand another motor. A longer show, of course, goes without saying, but since this was meant to be a demo, I feel a minute of showtime was adequate. I had also hoped to install a few LEDs in WALL-E, to flash on and off with his speech, to make it clearer to the audience who was ‘speaking’ when the audio files played. In the end, that turned out to be unnecessary- WALL-E bobbing his head was an obvious enough effect. To sum up, WALL-E II was a good demo and testbed, and as a first try I feel he was very successful. If I ever make another animatronic, a sturdier frame, a few more servos and more time to play around would be very nice!

Why Use Animatronics in History?
In closing, I’d just like to point out a few words about how something like WALL-E II could be useful in an exhibit. My presentation consisted of myself asking WALL-E questions, his answers and my responses to those answers. What made it work, despite WALL-E having only two lines, was that we made it interesting. WALL-E ignored me, had to be woken up, refused to say his name (at first) and looked away from me while I tried to speak with him. The net result was that it seemed my own creation was not interested in listening to me at all. This is maybe a rather strange version of interactivity- the animatronic is not really responding to what I say, but it certainly looks like I am to the audience. Though I was the middleman between WALL-E and his audience, the show was nevertheless for them, and involved actors who were decidedly non-human. This, as I’ve argued before in this blog, is the usefulness of the animatronic in designing exhibits. The animatronic can be a rare bird, a wise old tree, a replicate of an artefact that comes to life before one’s very eyes (imagine being able to listen to a totem pole tell it’s own story). It may only be the illusion of life, but we can make it seem pretty real.


Keepin’ it Unreal,
Scott W.E. Dickinson

(P.S.- this blog will be undergoing yet another change as Summer term starts- tune in near the beginning of May to find out why!)

Gilderfluke and Max 7

It Lives!

Unfortunately, WordPress won’t allow me to upload the short video I’ve taken of my animatronic, but I’ve entered the final stage of creation for my project: finishing the program for its little show.

One of my classmates has noted my animatronic’s resemblance to a low-budget version of WALL-E, the robotic star of PIXAR’s movie of the same name. WALL-E is a character of few words, which is perfect, since my animatronic will only be capable of saying a few words! Or, rather, I only have time to program him to say a few. So, my previously unnamed animatronic will be christened, however belatedly, as WALL-E II.

I found, when fiddling around with Max (more on that below) some interesting ways to get my program to unfold. At first, each new movement of my animatronic was controlled by a small clock (or metronome, as Max calls it), each step in the program following the last at a set interval. This was not exactly ideal. Since each new action was triggered at the same interval as the last, no action could last longer than what the metronome was set at. Actions that took less time would cause the animatronic to move, then pause noticeably before the next set of instructions were delivered to the servos. Since I needed my animatronic to pause and move at different rates, this would not work.

A different solution presented itself. I needed each action to follow right after the previous one had ended. Was there a way to do this? Luckily, yes. By removing the metronome and connecting the output of the servo instructions back to the counter (which was what moved between steps in the instructions), the end of each action would trigger the next one in the series. This was far more satisfactory. Now, I could have an action that lasted several seconds, followed by several which lasted only fractions of a second. In short, my animatronic is now capable of far more nuance and expression than he was before.

A few weeks ago I had felt I had reached a roadblock with programming. It was very easy to control a single servo directly, and giving it instructions via Max was not difficult at all. My main problem was in commanding more than one servo at a time, which I found difficult to do. Max seemed to only want to control one servo at a time. It turns out that there was a simple fix available, and my professor would supply it to me, but in the meantime I thought it necessary to look at some other software that might do what Max wasn’t.

Hence Gilderfluke. Gilderfluke and Company is a firm that designs and manufactures entertainment electronics, including animatronic controls and components. They do a lot of work for Disney, Universal Studios and all the other big names in animatronic entertainment, and almost all of their stuff is far out of my price range- except for a bit of free software that Gilderfluke offers to hobbyists. I downloaded this free software and found that it was far too detailed for my little project. Although a comprehensive program, Gilderfluke PCMACS (the name of the program) was not exactly beginner-friendly. It took me the better part of an hour to find what I think is the programming tool. It is immensely detailed, and with it one could program an entire show’s worth of animatronic characters.

It’s also much less intuitive than Max 7, and rather less user-friendly (PCMACS was originally designed in 1999, and it shows). At the point when I realized that PCMACS would be almost unusable in the time frame I had to learn it, my problem with Max was solved. Good thing Gilderfluke just gives this program away.

If this explanation was unclear, don’t worry. My animatronic will be complete next week, and soon after I’ll put up a complete after-action report about it, including an illustrated guide to the patches in Max that I used (as well as some developmental ideas, and stuff I would’ve liked to implement if I’d had another month or two).

WALL-E’s talkin’, but I wouldn’t expect walkin’.
Scott W.E. Dickinson

A Progress Report

We’ve got a robot, folks.

Or, at least the beginnings of one.


Not exactly the greatest face out there, but once I get the LEDs in there it’ll be (hopefully) a bit more expressive.



As you can see, he uses servos to move (check in again soon for a post where I demonstrate the current range of motion). One servo controls the head’s left and right movement, while the other nods the eye assembly up and down. I’m hoping to add some more movements soon. The Phidget servomotor control board that interfaces with my computer is located inside the body itself.


Here’s a shot of the full figure as it is now. I hope to cover some of the K’nex bodywork with decorative cardboard to make it look more like your average movie robot, non-functional control panels and all.

Check Back Soon for some Movement!
Scott W. E. Dickinson

Further Thoughts on Animatronics

Or, more thinkin’ on the Lincoln

Creating an Animatronic show is hard. This should come as a surprise to absolutely no-one. Disney started the whole business and they still create the greatest displays and the most immersive experiences. Doing this takes a great deal of money, teams of experts and lots and lots of time.

I don’t really have any of these things. Then again, my goals are rather less ambitious than what Disney puts out.
I had originally hoped to do a very short show with a single, basic animatronic. Looking at a number of hobby sites for people interested in this stuff gave me a couple of good ideas on how to make an expressive animatronic using just a few servo motors- it turns out that being able to tilt your head adds immensely to your expressiveness.

The key word for this project is limited. I’ve never done anything like this before- which is good!- but I have very limited resources to work with, which limits what I can do. I should also limit my plans for the ‘show’, such as it is. A couple of minutes of dialogue should really be sufficient for what I have in mind. The standard I’ve seen from experienced hobbyists is that it takes about an hour to animate one minute of movement- and that’s for people who know what they are doing, with specialized software to control the servos. Believe it or not, there are several companies willing to sell this software (and the accompanying hardware some of the programs require) to hobbyists, but their prices are … optimistic. Rather expensive for a one-time project, and besides, the purpose of this assignment is not to spend money, but to think and create. I’ve already got Max 7, which is designed for controlling art installations. If it can do that, then hopefully I can get it to sync a few servos (at this point, my still-not-finalized design calls for four separate servos- more than that would probably be more than I could handle).

As for the animatronic show itself, I feel that my original idea- to make a cardboard version of Abe Lincoln (or some other famous historical figure), something right out of Disney’s Great Moments with Mr. Lincoln- may have been slightly impractical. I thought of doing perhaps a famous Canadian figure- to my knowledge, there are no animatronic Prime Ministers (hop to it, DisneyWorld).

I had trouble finding a suitable person, however. There are no famous, defining speeches given by Canadian leaders. Sir John A. Macdonald has no defining moment. Nothing that William Lyon Mackenzie King said has been remembered very well. This makes it hard to find anything to make a show out of.

Also, I am not sure that a ‘show’ is quite the right direction to go in. Your average exhibit is not something to which you go and watch for 15 minutes straight. Most exhibits are meant to be moved through. An animatronic, sit-down show has perhaps too much of the theme park in it. However, not all animatronics are in shows.

Theme parks are so called because they do not just have rides, but ambience. A number of rides at Disney and Universal make use of show elements in the queues for the rides themselves. A favourite of mine is Star Tours. Before boarding your ship for a voyage through space, the line winds its way through the repair bays of the eponymous spacefaring company. There, those waiting in line encounter the company’s staff of wisecracking maintenance robots, hard at work repairing the ships. These animatronic figures talk to the audience- but on prerecorded loops. These loops are long enough that unless it is extremely busy, you will have moved on before the figures reset. This sounds like something that could be translated into an exhibit. Figures that come to life for a few moments to greet guests or to provide greater life to a diorama seem like they would be far more welcome at a museum than a large sit-down show. Much less expensive, as well.

That’s the direction my project is heading in- a figure that one might encounter while waiting in line or while walking through an exhibition- someone that one stops and watches for a bit, rather than something that becomes the star of a stage show.

Staying out of the (Mechanical) Limelight

Scott W.E. Dickinson

Automatons and Animatronics

I’m a bit of a Disney nut.

That’s a statement that needs a bit of qualification, I feel.

Although I do enjoy the Disney Company’s films (though they weren’t as much a part of my childhood as they were for some people), I find that their greatest expression of showmanship is in the Disney World Parks. To those who have been there, and even to those who haven’t, the Magic Kingdom is known for the amazing quality of its rides, which focus less on thrills and more on the experience.

Almost all of their rides involve Disney’s “Audio-Animatronics”, those life-like speaking, moving (sometimes walking) figures that populate rides like the Jungle Cruise, Pirates of the Caribbean or the Haunted Mansion. For those who know me, it’s no surprise that my favourite ride is the Haunted Mansion. It’s a masterful combination of exquisite storytelling and incredible special effects.

It is, in fact, the last ride that Walt Disney worked on before his death, though he did not live to see it finished (That honour goes to the Pirates of the Caribbean, which opened in 1967). These rides are populated entirely with mechanical (in Walt’s day) and now electronic figures that are extremely life-like in their movements (even when they are supposed to be ghosts).

Most of these Audio-Animatronic figures repeat the same, short sequence again and again- each rider will only see one for at most a few seconds, and it is important that Disney gives their millions of guests experiences that, if not identical, are at least comparable.

This does not mean that there are not rides where a single figure holds the audience’s attention for a prolonged period.

Great Moments with Mr. Lincoln is a long-form sit down attraction. It involves no drops, loops or turns- there isn’t even a fancy lightshow! Yet it performs, and has performed, to great crowds for decades. It is the most famous part of the Hall of Presidents, where Audio-Animatronic versions of all American presidents (including a Robo-Obama) reside.

Disney is always pushing the envelope on what can be done with Audio-Animatronics. Though most Animatronics cannot walk under their own power- they are mostly bolted to their environments, and rely on external computers and hydraulic systems to work- Disney has pioneered a number of moving Animatronics which can move about on their own and interact with guests, although they are really electronic puppets controlled by a human operator.

Audio-Animatronics themselves spring from the much older concept of the automaton: clockwork mechanisms built to imitate human and animal life. The most famous (and most complex) ones were mostly built in the 18th century, many of them for the decadent courts of the French monarchs.


The Turk, pictured above, was actually not a true automaton. When originally unveiled, it toured the royal courts of Europe, where this man-sized mechanism played chess (and often won) and even answered questions via a Ouija board. Needless to say, it was an extremely clever hoax- one that was not solved until the 19th century- but in the climate of the time, when real automata could draw, write, sing, dance and even seemingly eat, one that could think was just another mechanical marvel. Mechanical imitators seemed almost real.

The term ‘Audio-Animatronic’ itself refers to the robots built by Disney- An Animatronic is any motorized figure, and the ‘Audio’ refers to the fact that the figures are programmed by sound- to ensure absolute synchronicity, the ride’s soundtrack activates the Animatronics.

Obviously, this is all a bit beyond me, but the idea of a mechanical figure- perhaps not a human one, but an animated figure- has great appeal. A surprising number of museums do make use of animatronics- it’s a great way to bring historic figures to life in a way that does not involve a screen. You can have Benjamin Franklin in the room with you- in some cases he can even respond to your questions. Animatronics can bring to life dinosaurs, dragons and statues. Imagine museum guests talking to a totem pole from the Pacific Northwest, or being told about Ancient Egypt by a sphinx.

I think animatronics work best when they bring to life what isn’t human- perfectly replicating human facial expressions is difficult, even for Disney, and smaller companies cannot possibly manage it. However, a talking tiger or a posing statue is already unreal- no one can say that the tiger lecturing to you doesn’t look right. When’s the last time you saw a talking tiger?

I’ve got to design an exhibit of some sort in the near future- and I think that some simple animatronics can make an appearance. They don’t have to be particularly complex- Walt himself had great success with a few pneumatic valves- but I think bringing something to life would be amazing fun.

A Pirate’s Life for Me,
Scott W.E. Dickinson

A Change, not a Rest

Hey Followers (and I know who you are)

I’ve been rather quiet for some time on this blog (Testing, Testing, my last entry was a failed attempt at coding for a now completed project, rather than a proper entry), but expect some more activity in 2015.

The first (and original) reason for this blog is no longer relevant. The course for which I created this WordPress account is, well, over. However, you can expect more from me on the digital front, as I’ve enrolled in a new course and will be, in the coming months, talking about digital exhibit design.

Are you excited? I am.

Totally Didn’t Fail The Last Class,
Scott W. E. Dickinson

Battles are Easy, or Feelings and Facts


Verisimilitude. Now that’s a fun word, right? To say that something has verisimilitude is to say that it has the appearance of reality- that it feels true, in other words.

Note that distinction. That something feels right does not mean that it is right. As any physicist can tell you, common-sense notions are often the most wrong. Verisimilitude is simply giving something the impression of being real. It can be convincing, but it doesn’t have to be true.

My favourite example of this sort of thing comes from the world of set design.

Greebles, despite sounding like a particularly unfortunate children’s breakfast cereal, is the technical term for all those little bits and pieces that model designers put on their spacecraft, a la Star Wars.


All those cubes and boxes and pipes and doohickeys that cover the spacecraft’s hull don’t do anything- but they look like they might. The purpose of all this is to convince the viewer- that is, us- that this is a great big complicated machine, and nothing says complex like a very busy mechanical landscape. A perfectly smooth surface would probably look even more futuristic, but that’s best left to those super-advanced evil aliens who show up around Act 3. This is verisimilitude- it’s the act of seeming. You can’t help but say “Wow, that looks real!”

For everyone who isn’t into fictional spacecraft, I present another example: forced perspective. Theme parks use this all the time. By changing the scale of buildings or structures as they move further away from you, Disney can make their themed environments larger than life.


Look carefully at these buildings (from Main Street U.S.A., Disneyland). What appears to be a three-story shop has only a ground floor tall enough to admit adult humans. The rest is an (increasingly) underscaled mockup, designed to make you, the visitor, feel smaller, and hence more child-like. World of Magic, eh?

It works in the reverse as well.


This two story building is actually more than five stories tall.


And only Disney can make an Everest easily traversable by train.

The point is that we can give someone an experience- one they can vividly enjoy, one that seems real and leaves a lasting impression- without it being necessarily real.

This is where the digital portion comes in, folks.

Historical computer games are perhaps not the best way to learn history. There’s been a lot of arguing back and forth over the relative merits of games as teaching devices- can they aid learning? Do they affect people at all? Is there any value in their messages?

I’m not really worried about that. A video game is more or less an interactive movie. You not only get to watch unrealistic people do impossible actions, but you get to control them as well! Perhaps this isn’t the best medium for a deep reading into, say, the economic effects of the Crimean War.

It is, however, a great medium for getting the feel of an age. I am thinking specifically of the newly released (as of this writing) Assassin’s Creed Unity, a game which takes place in Paris during the French Revolution. Are the characters real? Nope. Is the plot a historical narrative? Nope. Is the backdrop real? Oh yes.

The only games that can really pull off guiding the player through a historical event are war games, where, for example, you could play both sides at Waterloo, or mess around with tanks in North Africa. At least battles have fairly simple objectives that can be quite historically accurate. You play as Napoleon- your objective is to re-play and win all the battles that he won- very straightforward.

But how does one model situations that don’t involve guns? How do you make a game that simulates the life of a British farmer, circa 1750? Or life on the home front of WWII? Or just surviving during the Great Depression? Storylines that would make sense in these settings would probably not involve a lot of gunplay or car chases and thus don’t exactly appeal to most folks who are buying games.

Games can, however, have very accurate backdrops to their unrealistic gameplay. Unity may be about a fictional assassin, but it’s about a fictional assassin running about and exploring a very real city during a very real period. Players can, while they kill, pick up something of how that period felt. The crowded streets, the strange, medieval layout of the city, the anger of the people, the uncertainty of the times can all be easily expressed. Actual names, dates, and facts are unlikely to be espoused by the game (or remembered by the players), but the sights and sounds of Paris will stick.

The game does do this- we see dirty streets and maze-like alleyways, foppish nobles hiding from angry crowds, impromptu trials and sudden executions- and the player’s character is involved in all of it.

It’s all about seeming. The best way to understand historic clothing is to wear it. The best way to understand old cars is to drive them. Since we cannot go back in time, the best way to experience an earlier period is to simulate it. What could be better than a simulation where you can interact with your surroundings? Unless a museum is willing to shell out truly enormous funding, the best simulations are going to come from video game engines.

Just as Expedition Everest conveys the scale of Everest while being much smaller, or Main Street USA invokes nostalgia for a childhood many people did not have, well done video games set in historical periods will probably not replace textbooks, but they will help players visualize what the past looked like.

Now to find a game about 19th Century Lumberjacks,

Scott W. E. Dickinson


New Ways of Thinking?

Paradigm Shifts and Technological Alarmists

In Nicholas Carr’s Is Google Making Us Stupid?, the author worries over something which has certainly made the rounds over the years:

“Is this [Insert New Technology of Your Choice] making [Our Youth Disrespectful/Milk Go Bad/Destroying Western Civilization/Encouraging Immorality]?”

Admittedly, Carr’s article is not exactly current, but it’s a fear that we’ve all heard many times from many sources: Is the Internet making us less intelligent?

As a misanthrope, I’m obligated to point out that the vast majority of our species isn’t exactly the brightest bunch of primates around, given the number of absurdities which I’m sure my readership can name. But I digress. I’m not here to insult the human species- to do that, simply turn on the major news outlet of your choice- but to talk about this fear of the Internet. It’s really just the fear of the new.

Those who know me well are probably pointing out that I’m not the most forward-looking guy. I actively resist using my cell phone. I prefer to remain unconnected and wire-free. Where others use an iPod, I whistle. Where others check bus times with their tablets, I pull out a book and wait for the bus.

Saying all this makes me sound like a hipster, trying to be contrary simply because it makes me unique. Not true. I’m contrary because I’m as mean as a snake, not because it makes me different. I do appreciate modern technology- heck, I’m online right now, aren’t I? I require the Internet to do my job, to talk to friends, to have fun, to connect with others, and for all else that we do online. I simply feel that when I’m not in front of my computer, I should be interacting with the real world without any distractions whatsoever.

But enough about me. New technologies, indeed, new ideas, new ways of doing things, new tools, new concepts- have changed how we think. This is nothing new.

The advent of agriculture, way back in the dawn of history, altered far more than the Internet has. Permanent homes, a class system, financial inequality (and the concept of money- abstract representations of wealth), writing, reading, precise calendars and many other things- all are the children of the rather straightforward concept of “grow enough surplus food to get us until the harvest next fall.” When we changed from hunter-gatherers to farmers, suddenly we had new things to think about- how to plant, harvest and store, how to build, how to protect what we had made, and most importantly, how to plan, and who should do the planning. Every complex society has been a settled one for a reason. There’s no need for a king when you’re a wandering tribe.

My point here is that new tools and ideas begat more tools and more ideas as well as a thousand new civilizations. It was an end to one way of life, and the beginning of another.

We see this repeated with many technologies. Faster travel creates new understandings of far off places- it also necessitates new developments in navigation, time-keeping and construction, while altering markets and remaking economic patterns. This is not always a good thing.

Certainly Google is not the first innovation to alter how we think. The clock altered how we thought about time. For a very long time, the seasons, not dates, were what most people watched the calendar for. When you are a farmer, knowing that it is Tuesday isn’t very important. Knowing that next week is usually when the frosts come, so you’d better harvest your wheat now, is vital. Clocks sliced up days into hours (at first- most early clocks were not reliable enough for a minute hand to make sense, let alone something as precise as a second hand) and suddenly, you could be late for an appointment! Hard to be rushing across town to your 2 o’clock when there is no clock. Hours and minutes are human-created divisions, and yet they seem to control us.

The same goes for the automobile, the typewriter, the telephone, the computer and a thousand other devices. This is what annoys me about Carr’s article. Our lives have been shaped and moulded since the moment we were born, by our culture, our language, our tools. People who speak other languages, who are part of other cultures, who lived in different eras, who, indeed, lived very different lives, did not and do not think like 21st Century North Americans do.

So Google may be changing how we think. So what? Everything else already has. We are, however, luckier than those hunter-gatherers. Thanks to our experiences and our history with our tools, we understand (perhaps better than most) that actions have consequences, that new tools can change things unexpectedly. We have let the genie out of the bottle- I doubt Google is going to shut down any time soon- but we can moderate how it affects us.

Besides, it is not as if this will eradicate long-form thought, as Carr worries it will. As long as being able to think deeply is useful to us, we will retain it. After all, we learned how to read once, didn’t we?

Film did not kill the book.

Books did not kill storytelling.

Writing did not destroy the art of memory.

But they did give us more options.

What was I talking about again?

Scott W. E. Dickinson