Easy solution to AT&T GETS problem

So I was reading The Switch this morning and saw a post from yesterday by Brian Fung about AT&T claiming it cannot prioritize GETS traffic for emergency responders and the government during national security crises, so I thought I would give them a free solution to the problem.  For the full article, here is a link.


For a brief overview: with the current copper lines, the phone companies are able to prioritize calls after a cop/firefighter/FBI agent/President of the United States enters a special code in times of great emergency, when the phone lines are tied up by people checking on loved ones or calling 911 to report an obvious emergency.  Think the attack on the Twin Towers, or Hurricane Katrina (examples in the article).   Now that AT&T is proposing expanding its VOIP-over-fibre phone service U-verse to the rest of the US, it is claiming that the nature of the internet means it cannot prioritize specific VOIP traffic in an emergency.  There are only four possible reasons for this response:

1) Everyone working at AT&T is a moron. Not out of the question.

2) They are lying to the US government in order to shave a few points of margin off their service at the expense of lives. Most likely.

3) They are in fact doing so, but in secret and only for national-emergency issues, leaving out first responders. Sneaky.

4) This is 1978, during the DARPA period of the internet. It's not.

Here is a simple solution that I crafted in the shower for a problem many organizations deal with for VOIP and other IP traffic every day. It took 15 minutes and a bit of drawing.  It's high level, but based on the principles of source/destination IP prioritization.

[Diagram: ATT Solution Simple]

Basically, what this says is that a priority phone user (a firefighter, let's say) during an emergency can dial a special PIN, which opens up a call to a special VOIP server or is prioritized on the VOIP system in question. That number is a forwarding number, similar to the 1-800 long distance forwarders we all know and loathe. When they get that dial tone, they type in their number and it makes the call to their destination, which can be a special line or a regular line. All prioritized IP traffic based on source and destination. Very easy to do.  Here are some links using Cisco gear as a baseline.

This link describes 'policy-based routing' on Cisco IOS devices (routers), which would need to be programmed to prioritize traffic coming to and from the priority VOIP server.

This link is a little less technical and gives an overview of traffic shaping.

Because U-verse is an IP-based routing platform, it follows the same traffic-shaping rules as other WAN networks. AT&T's argument is that because all VOIP traffic looks the same to their routers, they cannot differentiate emergency traffic from regular traffic and so cannot use traffic shaping to relieve congestion. I have shown in less than an hour, with a very crude drawing and a couple of links, that the real obstacle is simply a desire to save money on an emergency destination (which can be a 'Virtual IP' that simply gets prioritized routes, or the priorities can be created dynamically, as this paper demonstrates).
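To make that concrete, here is a minimal sketch of the two moving parts, written in Python purely for illustration (this is not AT&T's system, and the addresses and queue sizes are made up): classify a packet as priority when its source or destination is the emergency VOIP server, and service the priority queue first when a congested link has to choose what to send and what to drop.

```python
from collections import deque

# Hypothetical address of the emergency/priority VOIP server (illustration only).
PRIORITY_VOIP_SERVER = "203.0.113.10"

def is_priority(packet):
    """A packet counts as priority traffic if it is going to or coming from the
    emergency VOIP server, the same source/destination test a router's QoS
    classifier or policy-based routing rule would apply."""
    return PRIORITY_VOIP_SERVER in (packet["src"], packet["dst"])

class CongestedLink:
    """Toy strict-priority scheduler: emergency traffic is always sent first,
    and when buffers fill up, only best-effort traffic gets dropped."""
    def __init__(self, queue_limit=4):
        self.priority = deque()
        self.best_effort = deque()
        self.queue_limit = queue_limit

    def enqueue(self, packet):
        if is_priority(packet):
            self.priority.append(packet)           # never dropped in this sketch
        elif len(self.best_effort) < self.queue_limit:
            self.best_effort.append(packet)
        # else: the best-effort packet is dropped (congestion)

    def transmit(self):
        """Send one packet: the priority queue is drained before best effort."""
        if self.priority:
            return self.priority.popleft()
        if self.best_effort:
            return self.best_effort.popleft()
        return None

# Example: a firefighter's call to the priority server rides out congestion.
link = CongestedLink()
for i in range(10):
    link.enqueue({"src": f"198.51.100.{i}", "dst": "198.51.100.200"})  # regular calls
link.enqueue({"src": "192.0.2.50", "dst": PRIORITY_VOIP_SERVER})       # emergency call
print(link.transmit())  # the emergency packet goes out first
```

On actual Cisco gear the same idea is expressed with an access list matching the server's address feeding a class map and a priority (or shaped) queue, which is roughly what the policy-based routing and traffic-shaping links above describe.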

Basically, AT&T does not want to create a separate server to handle emergency traffic, and is unwilling to use this solution (or come up with a better one) in order to save on some equipment costs and the man-hours required to update their routers with the prioritized routes.

If you think this wouldn’t work, or have an even better solution, tell me in the comments!

To a friend

To a dear old friend who is retiring today.
We have had many ups and downs since we started working together all those years ago. You were so fresh and peppy when you came into the office that first time. Colourful, and new. You were so much better than your predecessor that it defied logic: how could you be both more fun and more hard-working? Whatever the reason, you were fantastic.
As the years went on, we had our ups and downs. You started to get sick, more and more often. We got scared, and we did not know what to do. There was one dark day in August 2004 when we thought we would lose you. You would have died on the table but for the hard work of many souls, and we thank the heavens every day that you survived, because you were so much stronger afterwards. You got sick less and worked more. You became our friend again.
Fast forward a few more years when your younger brother showed up. We hated him. We still hate that version of him. He was the laughing stock of your family and almost gave you a bad name. I say almost, because you never wavered and still showed up every day to give it your all. Even though your brother tried to push you out, he couldn’t and you kept going and never let go of the job.
A few years later your younger brother, fresh out of rehab and with a brand-new skip in his step, showed up and finally began to take the load off. We knew you were getting tired and needed a break. It had been almost 10 years of you shouldering the load alone by this point. That's too much for anyone, and no one ever expected it of you.
As the years went on and your brother shouldered more and more of the load, nothing could shake the fact that you are the gold standard. The yardstick by which all others are judged. I hope you know that even though we have tried our hardest to remove your mark from the workplace, it's not because we don't love you, but because we are scared of the day you will no longer be there to hold us up and make us better.
But you deserve this day. It has been too long coming, and we have kept you from retirement too long. Some of us will never forget you, and for me, you will always be a part of my life.
Windows XP, this day is for you.

PS4 vs Xbox One review impressions

So the first comparison reviews of the new consoles are either in or trickling in, and the results are interesting, to say the least.  First off, I'm not planning on buying either as a gaming system; I have a pretty nice gaming PC that I prefer to use for many, many reasons.  However, I might buy an Xbox One as a media center (or a PS4 if it turns out that it's better), but my media center still has some fight left, so I get to table that issue until there are fewer unknowns.
The really interesting thing I'm reading is that while cross-platform games use similar graphics settings, the PS4 is either supersampling (rendering at a higher-than-1080p resolution and downsampling to fit the screen) or doing honest-to-god anti-aliasing. Proper AA has long been one of the core founding principles of the PC Master Race, so this is an interesting development. It also means there is a chance I would jump platforms, as aliasing is one of my main complaints about console gaming.
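For anyone wondering what supersampling actually buys you, here is a toy sketch (pure illustration, not how either console's scaler works): render more pixels than the display has, then average each block of rendered pixels down to one output pixel, which is what softens jagged edges.

```python
def downsample(image, factor=2):
    """Box-filter downsample: average each factor x factor block of rendered
    pixels into one output pixel. 'image' is a 2D list of grayscale values."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h - h % factor, factor):
        row = []
        for x in range(0, w - w % factor, factor):
            block = [image[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A hard black/white edge rendered at 4x4 becomes softened grays at 2x2.
rendered = [[0, 0, 255, 255],
            [0, 0, 255, 255],
            [0, 255, 255, 255],
            [0, 255, 255, 255]]
print(downsample(rendered))  # [[0.0, 255.0], [127.5, 255.0]]
```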

However, the other interesting thing is starting to show.  Microsoft only included 16 ROPs (render output units) versus the PS4's 32.  These are basically responsible for actually putting the rendered pixels where they are supposed to go on the screen.  This probably doesn't mean anything to you, and it wasn't considered a huge deal because plenty of 16-ROP cards gamed at 1080p in the past.  However, it seems that PS4 games do look better, even if the difference is minor, mostly thanks to the PS4's ability to push more pixels, which hasn't been an issue in PC gaming for a long time.
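For a rough sense of what those ROP counts mean, multiply them by the GPU clock to get the theoretical pixel fill rate. The clock speeds below are the commonly cited public figures (roughly 853 MHz for the Xbox One and 800 MHz for the PS4), not numbers taken from these reviews, so treat this as a back-of-the-envelope sketch:

```python
def pixel_fill_rate_gpixels(rops, clock_ghz):
    """Theoretical peak pixel fill rate: one pixel per ROP per clock."""
    return rops * clock_ghz

# Commonly cited specs (approximate, for illustration).
xbox_one = pixel_fill_rate_gpixels(rops=16, clock_ghz=0.853)  # ~13.6 Gpixels/s
ps4      = pixel_fill_rate_gpixels(rops=32, clock_ghz=0.800)  # ~25.6 Gpixels/s

# For scale: drawing every pixel of a 1080p frame 60 times a second, once,
# needs only about 0.12 Gpixels/s; overdraw, blending and AA eat the rest.
per_frame = 1920 * 1080 * 60 / 1e9
print(xbox_one, ps4, per_frame)  # 13.648 25.6 0.124416
```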

Anyway, aside from AA making a (for me) large visual difference, now that both consoles are finally out from under NDA the pundits are properly aligning this war (Engadget even slaps you in the face with it): it's about the services, gaming and otherwise. And from that perspective it's probably a very good thing Sony has the graphics edge, because they are entering a pure software war with the king daddy of software companies.  And the corpses of many similar corporations litter that battlefield.

I've said this before, but I honestly think the one thing Sony needs is a major service partner, whether Google, Apple, Facebook or Amazon.  Without that there will be too many pitfalls and challenges as they attempt to do battle with Microsoft in this brave new technology world. Sony, things have changed since 2006, and I fear you have not learned that.

The Theories of Evolution and Gravity

I'm sick to death of stupid people everywhere questioning the products of good science, and I know you are too.  This has been a problem for longer than written history, and it is generally the result of a combination of two sad human conditions: our natural inclination to trust the slick-talking spokesman with great hair over the dry, nerdy fellow with bad skin, and our innate desire to believe whatever offers us the best reward for the least effort.

Now, this applies across the board, from our shopping habits to our choice of romantic partner, and I couldn't care less how people decide those things. If you want to overpay for your purse, sweater or boyfriend because you feel better about that particular brand or style, then go for it. More power to you! Those decisions are yours to make, and yours alone, and the reasons behind them don't affect me at all.  However, when we begin talking about the direction the human race is headed on big topics, like how we came to be or how we interact with our planet and universe, I want more than your feelings on the subject.  I want verifiable data and the conclusions reasonably drawn from it.

And this is where I begin to see red, mainly on the topics of evolution and climate change. These are two hot-button topics that matter a whole hell of a lot to all of us, and instead of properly discussing and studying how we are to deal with impending problems on our planet and in our genome, we are still stuck talking about whether they are happening at all. Why are we talking about that? Is the science conflicted? Are we seeing large discrepancies in our data sets, or is the data so inconclusive that multiple theories are drawn from it? No. We are discussing it because some people feel these things shouldn't be happening. Huh? Shit, I feel climate change shouldn't be happening too, but that doesn't mean it isn't a theory backed up by a mountain of verifiable data.

The argument goes something like this:

Normal Person: “Man, isn’t it crazy that our industrial emissions are having such an effect on the earth, to the point where we are raising the global temperatures faster than any point previously measured?”

or

Normal Person: “Hey, isn’t it neat that over millions of years we developed all these traits as a reaction to our environment, and that process left us as humans?”

Stupid/Religious Person: “Hey now, we don’t know that for sure. It’s only a theory, not a fact.”

Normal Person: “Well, gravity is only a theory too and we know that’s happening”

Stupid/Religious Person: “I can see gravity happen, but we don’t know evolution is happening; it’s just a theory.”

Normal Person: “Please don’t reproduce.”

This conversation (or a cruder, more real version thereof) happens all the time, every day: on the internet, on television (WTF) and, most frighteningly, in our PTA meetings.  It would be funny if it were simply a fringe group exercising its freedom of speech without affecting us, but it does affect us, in so many ways.  First off, when we were first exploring the ideas of natural evolution or man-made climate change, they were not theories.  A theory is not a guess; it is a model that can use a series of data points to predict the results of changing certain variables.  In plain English (according to Oxford), a theory is:

theory

Syllabification: (the·o·ry)

noun (plural theories)

  • a supposition or a system of ideas intended to explain something, especially one based on general principles independent of the thing to be explained:

When you want to describe a guess that you will later test for validity, the word you are looking for is 'hypothesis'.  So when you say "My theory is that it was the cat that knocked over the plant, and the dog is innocent", unless you have hard evidence to implicate your evil feline, you are pulling that 'theory' out of your ass and you do not have a theory. You have a hypothesis (or really a wild guess, but let's keep it simple).

So when someone argues against the "theory of evolution" or the "theory of man-made climate change", they are arguing with conclusions drawn from a ton of data.  Why is there so much data? Because instead of moving on to the follow-up questions, we are continuously forced to test the same hypothesis over and over again.  This is a huge problem, and it is seriously slowing down scientific progress.  In the world of science, if you want to challenge a theory, you don't just say 'it's not happening' and move along; you develop an alternate hypothesis that fits the facts. If you can do that, then you enter the realm of having multiple theories describing the same event or occurrence, like we do when it comes to the origins of the universe.

In conclusion, if you want to challenge either of these two theories, you need to come prepared with some other idea that fits more of the facts than the prevailing theory does. Then you will have a debate.  Until that time, all you have is a bunch of stupid people arguing against reality.  Fun side note: both evolution and climate change are much better understood than gravity.  Even though we are well acquainted with the effects of gravity, its underlying cause is still very much unknown, and there is no complete theory to explain it.  The best we have is the hypothesis of the 'graviton', which is pretty flimsy.

The Smartphone wars and why the entire argument is moot

First off, let me get the disclaimers out of the way.  I have owned or used for work every major brand of Smartphone except for Windows Phone 7/8, including Palm, BlackBerry and Windows Mobile. I have both owned and used multiple iPhone models and I have been, to one degree or another, actively involved in the Android development community for years and have used an Android smartphone personally since Donut (Android 1.6).

So when I say it doesn't matter what brand or platform you are using, I mean it.  One of the most controversial topics on the internet is iOS vs Android vs BlackBerry vs Windows Phone.  It is debated more frequently than left- vs right-wing politics or atheism vs theism, which is shocking for two reasons. One, it's a product that you buy, so why care so much? Two, they all serve the same general purpose in the same general way, without exception.

First, some background.  Through the '80s and '90s there was a commercial battle over which platform would power the world's computers.  First there was Apple DOS and IBM DOS, the latter powered by Microsoft code.  Then came the Mac's System software (later Mac OS) and MS-DOS, along with IBM's OS/2.  Throughout this time there were various flavours of UNIX powering workstations and mainframe-like systems. The question of who would provide the dominant operating system mattered so much because most computing tasks were 'workstation'-style tasks, meaning they were accomplished entirely on a single computer, by software running on that computer, accessing files on storage directly connected to that computer.  For example, accounting was done in a spreadsheet in Lotus or Excel and saved to a hard or floppy disk.

Because of this disconnected workflow, it was extremely important to control the platform, the operating system, because that let you not only sell the platform to OEMs and directly to consumers, but also extract value from software developers through certification and validation testing.

So fast forward to today: Windows clearly won the workstation platform war, and Microsoft is still making money hand over fist licensing it out and selling Windows Logo Certification for everything from video cards to mice to computers to weird headbands that can read your mind. But hold on, there is a new breed of devices with new operating systems vying for dominance in the marketplace.  And with a new form factor (touchscreen devices) come pundits drawing battle lines like it's 1993 again.  But the world has changed since 1993, and I'm going to show you why it's not so simple a formula, and why Google is not guaranteed dominance simply by controlling Android.

Today, the modern workflow is not simply "load program, open file, work on file, save file, close program" like it was 20 years ago.  Today, software is expected to interact not only with the same software running on many other computers, but with complex data sets hosted in multiple locations, accessed from multiple devices by multiple users concurrently.  The other side of the coin is that even tasks that used to work in 'workstation' mode (as defined above) now operate in the 'client-server' model, where the computer in front of the user is simply a gateway to information and operations on a physically separate computer (the server). In layman's terms, we access our information 'in the cloud' or 'on the internet'. Everything is offered as a service, in computer speak, which means you connect to it from a 'client device', either through a web browser or a 'thick client' (an app).
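As a trivial sketch of that client-server shape (the endpoint below is a made-up placeholder, not a real service), the 'thick client' half of a modern app often amounts to little more than this: ask a remote service for the data, display what comes back, and keep nothing essential on the device itself.

```python
import json
from urllib.request import urlopen

# Placeholder endpoint; a real app would point at its vendor's API instead.
SERVICE_URL = "https://api.example.com/v1/calendar/events"

def fetch_events(url=SERVICE_URL):
    """Thick-client workflow: the device just asks the service and renders
    the answer; the data itself lives on the server, not on the device."""
    with urlopen(url) as response:
        return json.load(response)

if __name__ == "__main__":
    for event in fetch_events():
        print(event.get("title", "(untitled)"))
```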

Because of this fundamental shift in how we process information, it really doesn't matter what device you use, as long as it can reach the services you need (in a reliable, functional manner). This phenomenon is most noticeable to Mac users, who even five years ago generally had to keep a Windows installation or computer handy for certain tasks, whereas today there is almost no limitation to running Mac OS, because most everything you would do on a PC is available as a web service (or has an equivalent). Today, the decision between Mac OS and Windows is an aesthetic (and economic) one, rather than one rooted in needing functionality the other doesn't provide.

So in today's computing environment, all someone 'needs' is access to the online services we use every day: email/calendaring, social media, media services (music/video) and games.  Because all of those functions have been lifted off the device and 'moved to the cloud' (god I hate that term), the only 'need' when picking your device is access to the platform, which is now the internet.  In the '90s, people largely chose their operating system because of their needs: they needed access to Lotus Notes, so they needed a computer Lotus Notes would run on.  Today, they just need to access stuff, and there are a multitude of ways to do so from any device.  So they make their choices based on 'wants', which is fine, but it's not a big deal.  They 'want' access to the Play Store, or the App Store.  They want access to this game instead of that game.  But ultimately, every major service is available on practically every operating system, which is great.  It does, however, make the question of who will own the platform moot, because the platform is the internet.  The real game has thus shifted to who controls the services, and that is easy: Apple, Google, Facebook, Microsoft and Amazon, almost without exception. Even third-party services will generally use one or more of those companies' identity platforms to handle user registrations.

This new world will never see the same result as the one from the '90s, because it doesn't need to.  We can have multiple OSes running on an infinite number of devices accessing the same or similar services, and the world will work just fine like that.  If someone discovers an exciting new use for their iPhone, it will be available to Android users soon, don't worry.  Stop arguing and be happy.  On the question of who wins between iOS and Android, the answer is both (or rather, the internet).  Because both operating systems act as excellent platforms for thick-client software talking to the same services, there is no need for one to die so that the other can flourish. So stop screaming online about how your piece of glass is better, because in the end they aren't really SmartPhones, they are SmartClients.  Phone service is just one of the many services they act as a client for, and they all do it pretty damn well.

Why this is NOT the Next Generation of Gaming

Since this week begins the release of the new Sony and Microsoft game consoles, and Gabe Newell is doing his absolute best to keep his SteamOS / SteamBox products in the front of your brain, now would be a good time to let my opinions on the subject loose.

First off, describing "generations of gaming" in terms of console releases is stupid.  Wrong, mostly, but stupid too.  Why? Because industries this big don't turn on a dime; they constantly change in small but measurable ways.  It would be much more accurate to describe the history of gaming as a continuum, with console releases being just some of the milestones along that line.  The best example of this is the launch PS3 and Xbox 360 versus the versions that ship today.  Both systems have undergone extreme changes, from the UI to their basic capabilities, between the day they shipped and today.  The Xbox 360 launched without service apps, a web browser or the ability to natively play any video other than Windows Media formats.  Microsoft tweaked and changed the 360 to support a couple of internet services and some extra codecs, added a web browser, and voila! An entirely new device, much closer to the vision of the Xbox One than to the original launch 360.  The same could be said of the PS3, with significantly improved online services through the PlayStation Network and the addition of web apps, although Sony also reduced functionality along the way with the very controversial removal of OtherOS (Yellow Dog Linux) support from their consoles, as well as the revolving door of backwards-compatibility strategies they used.  Because of these changes, if you were simply looking at a list of specifications you would put a 2005 Xbox 360 in an entirely different generation from a 2013 Xbox 360.  That is why I don't pay much attention to 'this generation', 'last generation' or 'next generation' and look simply at capabilities.  For the purposes of this article I'm going to compare gaming hardware from a few different perspectives to prove my continuum hypothesis.

The first point of comparison is, of course, graphics capability.  There are a few different things to consider here, although the one most widely discussed and compared is raw computing power as measured in FLOPS (the older chips weren't measured on general computing performance, though, so we're going to go old school and compare polygons per second like we're still in high school).  This is actually the silliest of all hardware properties to compare, but it is what it is, so let's start there.  First, let's go all the way back to the first game consoles and PC graphics cards designed specifically for polygon-pushing 3D: the Sony PlayStation, the Nintendo 64 and the 3dfx Voodoo Graphics as contemporaries.

[Chart: Gen1 (PlayStation vs Nintendo 64 vs Voodoo Graphics, polygons per second)]

As you can see, the winner here is the Voodoo 1, by about 2-10x. So for the first 3D generation, the PC started with a commanding lead, which only grew throughout those years.  Skip ahead to the next series of consoles and we begin to see an interesting evolution.  Let's compare the 'second' generation of 3D gaming, the PlayStation 2, the Nintendo GameCube and the Xbox, along with the high-end graphics card of the day: the GeForce 3.  By this point 3D graphics were the expected norm, with sprite-based 2D games all but gone (although they do make a reappearance a few years later on the web as Flash games), and Microsoft had waded into the console arena with the Xbox.  So who had the more powerful graphics: the console industry, or the PC?

[Chart: Gen2 (PlayStation 2 vs GameCube vs Xbox vs GeForce 3)]

There you can see that the Xbox and the GeForce 3 were neck and neck, leaving the rest of the pack behind considerably.  Why is that? Well, the Xbox WAS a PC. It was a Pentium III with a GeForce 3 (basically), and it helped Microsoft establish a foothold in the console industry.

As we head into Generation 3, it's clear that, with the exception of the Xbox (which basically was a high-end gaming PC stuffed into a little black box), the console industry definitely lagged behind in raw power.  Things spread out again when we hit the current generation (assuming the PS4 and Xbox One are 'next generation').  This time around Sony and Microsoft were the big players, and they wanted to convince everyone that their machines were space-age tech, man!  Faster than any PC you could buy at the time! Well, not really, as we're about to see.  The PlayStation 3 used the nVidia Reality Synthesizer (RSX), which was really just a rebranded 7800 GTX with less memory bandwidth and fewer pipelines; the Xbox 360 used the Xenos chip; and the Wii just sucked, so we won't talk about it in the graphics portion (it's basically about as fast as the original Xbox, and slower than whatever GPU is powering your smartphone).  This is the generation where we were promised PC-quality graphics? Did we get it? Not even close.  The high-end graphics card to compare this generation against is the nVidia GeForce 8800 GTX, which was the first unified-shader-model video card and changed the graphics game forever.  It came out a little after the console releases, but it's the reason that splitting video games into console-release generations is stupid.  Just so we have a baseline, let's discuss what the 8800 GTX was.  It was the first 'real' GPU, one capable of running general-purpose code on programmable shader processors that could handle intense parallel calculations.  In graphics-centric terms, it also allowed the GPU manufacturer to build one continuous pool of shader processors, rather than splitting the work into vertex (polygon) and pixel (texture) shaders, which was always a balancing act between what current games required and what future games would.  It also kicked the crap out of anything else around for two years straight.

[Chart: Gen3 (PS3 RSX vs Xbox 360 Xenos vs GeForce 8800 GTX)]

These numbers only tell half the story, because this is when the graphics industry moved from fixed-function shaders to the unified shaders of the 8800 GTX.  The RSX in the PS3 kept the fixed split of vertex and pixel shaders, while Xenos used a unified shader model.  This is why Xbox 360 games have had to make far fewer tradeoffs on new titles and tend to look a lot closer to their PC counterparts.   This brings us to today, when Sony and Microsoft are releasing their 'next generation' consoles, with all the glory that entails.  So, in terms of numbers, has the use of standard PC parts (not really, but OK) made a difference in closing the performance gap? Well, sort of.  In raw numbers the gap is smaller than it was in the PS1/N64 days and bigger than it was in the PS2/Xbox days.  Both the PS4 and the Xbox One use a variant of AMD's APUs, based on eight Jaguar CPU cores and a Graphics Core Next (GCN) GPU.

[Chart: Gen3and4]

[Chart: Gen4 (PS4 and Xbox One vs high-end PC graphics)]

As you can see, the PC side is still kicking ass in terms of raw power.  So if hardware were the determining factor, wouldn't the generational shifts be measured by dedicated video card releases instead of console releases? Or by feature sets based on OpenGL or DirectX capabilities? Or why don't we choose an entirely new metric altogether?
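If you want to sanity-check the raw-power claim yourself, the usual back-of-the-envelope formula for these GCN-era parts is shader cores times clock speed times two operations per clock (a fused multiply-add). The core counts and clocks below are commonly cited public figures rather than anything from the charts above, so treat the results as approximate:

```python
def theoretical_tflops(shader_cores, clock_ghz, ops_per_clock=2):
    """Peak single-precision throughput: cores * clock * 2 (one FMA per clock)."""
    return shader_cores * clock_ghz * ops_per_clock / 1000.0

# Commonly cited specs (approximate, for illustration).
gpus = {
    "Xbox One (GCN, 768 shaders @ 853 MHz)": theoretical_tflops(768, 0.853),
    "PS4 (GCN, 1152 shaders @ 800 MHz)":     theoretical_tflops(1152, 0.800),
    "GeForce GTX 780 Ti (2880 @ ~875 MHz)":  theoretical_tflops(2880, 0.875),
}
for name, tflops in gpus.items():
    print(f"{name}: ~{tflops:.2f} TFLOPS")
# Roughly 1.31, 1.84 and 5.04 TFLOPS; the high-end PC card is still ~3x ahead.
```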

This brings us to the next obvious metric for judging video game generations: features.  This is actually a much more nuanced conversation than some charts of processing power, because an original Xbox has more in common with a modern PC than a 386 from 1992 does, and a 2013 Xbox 360 more closely resembles an Xbox One than it does its own 2005 launch version.

When the original big daddy of consoles (the NES) launched, it could play games. That's it. No UI, no non-game interface at all.  The same was true for the SNES, and again for the N64.  But Sony had a different path in mind when they released the PlayStation, and that choice likely contributed a lot to its success: it could play CDs. This was a very big deal at the time, since it meant you could have a living-room CD player without another box.   This was the start of the console-as-entertainment-center phenomenon, which is basically the norm today.  When the PlayStation 2 and Xbox launched, one of the key differentiators went in Sony's favour, in that the PS2 was a DVD and MP3 player out of the box at a time when DVD players were not yet common.  The Xbox had the capability but required the separately sold Xbox DVD remote to make it work.

However, partway through the Xbox's lifecycle, something amazing happened.  Because it was based on generic x86 PC parts, a group of hackers got together and wrote replacement software for it called Xbox Media Center. You might know it better by its modern name, XBMC, and it was INCREDIBLE.  This development forever changed the course of console development by putting full-blown PC media capabilities on a console for the first time.  It could play MP3s, it could play NES and SNES games (what??), it could play downloaded video files sitting on a network share. It could do it all, and it would greatly influence both Sony and Microsoft in their next releases.

By the time the PS3 and Xbox 360 were rolling off the Chinese factory lines, the video game world had matured considerably.  It was clear that this was not a child's hobby, and that there was serious money to be made.  Sony spent billions trying to create a machine that would outperform the 360.  They had custom processors made, with a custom instruction set, trying to squeeze a little more performance out of the silicon of the day.  Microsoft, on the other hand, fell back on off-the-shelf PC hardware, just not any previously associated with Microsoft. They used Macs. Or, to be precise, the PowerPC processors found in Mac G5s.

Even though Sony tried their damnedest to make a faster machine, as we saw above, they failed miserably.  Both companies advanced their media feature sets to more closely resemble the Xbox Media Center machines, with the ability to play video files off of media servers (DLNA) and with some pretty extensive MP3 support.  There were issues with the codecs that shipped, and a lot of work was needed on the back end to get a good experience, but the support was there.  The Xbox 360 could even act as a Windows Media Center extender and was a fantastic machine if you had that infrastructure running in your house.  And that's not even getting into the HD disc format war, which was REALLY stupid since the internet kicked the crap out of both formats, but Sony had the major advantage there with native Blu-ray support.

However, the real generational shift around 2006 did not happen with the media functionality, although that continued to improve throughout the lifetime of both consoles, dramatically changing the advertised feature sets over five years.  The real generational feature shift came from little ole Nintendo, with the Wiimote.  This little bastard made gaming fun for millions of people who hated sitting on a couch holding a weirdly shaped vibrating plastic toy.

This change in control methodology was so revolutionary and well received that both Sony and Microsoft scrambled to come up with their own versions of it.  Sony used cameras to achieve the feat, with the PlayStation Eye and PlayStation Move, while Microsoft took the concept a little further with Kinect.  This kind of added control scheme would play a key part in the next set of releases, with the bundled-Kinect controversy.

Both the Xbox 360 and the PlayStation 3 changed so completely in every way throughout their lifespans that, if you look at anything other than the specific hardware, the change in generations clearly happened around the release of the Kinect, not of the Xbox One.  There were web apps (Netflix, YouTube, Internet Explorer and the PS3 browser), far expanded local media options, alternate control techniques (Kinect and Move), and the digital distribution of installable games.

So now we come to today, with the release of the PlayStation 4 and, next week, the Xbox One, asking ourselves "what is new this generation?" Well, the answer is all of the above: greatly improved media capabilities, excellent control schemes for our devices and a consistent social media experience.  Except these things were all added to last-generation hardware.  Yes, games will look better, and the additional RAM will allow for new experiences that the current crop of consoles simply cannot do properly, but the real feature additions are all software at this point, and they speak to the evolution of the console as a concept.

So, with all that in mind, I am going to propose the following new terms for 'generations' of gaming that take the entire wide world of gaming into account.  They will be fuzzy, and much less absolute than the media's console-release milestones, but they will tell a much fuller story.

First Generation of gaming:  Let's call this the Sprite generation.  It ranged from Asteroids to Super Mario Bros. and spanned the mid-1970s until the mid-1990s.  Games were created using sprite-based 2D images that moved around a screen.  Smaller images led to more lifelike animations, but the basic principles stayed the same.

Second Generation of gaming: This one would be called 3D Generation 1, or Fixed-Function 3D.  It started with PC games like Quake but moved to consoles with the N64 and the PlayStation.  There was a transition period in the console world where some Neo Geo games used a combination of sprites and rendered models to fake more 3D detail before the hardware could really support it.  The end of this generation began with the release of the GeForce 256's hardware transform-and-lighting engine and is only now completing with the release of the PlayStation 4. However, with the rise of mobile games on fixed-function GPUs, this generation might have a few more years left in it.

Third Generation of Gaming: This is the current graphical generation, and it would be called General-Purpose 3D, where the GPUs inside our hardware can do more than just 3D graphics; they can run many other kinds of parallel operations, allowing for greater interaction with particles and a closer adherence to the laws of physics.  This allows us to not only leave bullet holes in walls, but actually blow holes in them. This generation started in part with the GeForce 256 at the turn of the millennium, but it really took over once the GeForce 8800 and the Xenos chip in the Xbox 360 showed how much more flexibility this kind of graphics hardware gives developers.  It will end when the processing power required for real-time ray tracing becomes a reality and our images are generated by calculating the paths light would take through the scene rather than by building them from triangles.

Fourth Generation of Gaming: This would be the Internet Generation, and it has less to do with graphics and more to do with functionality.  This is the era of internet-connected gaming, which again began with Quake and Half-Life, moved to consoles with the Dreamcast's web browser and 56k modem, and was solidified with the Xbox and Xbox Live.  Even today this generation continues, with gaming hardware acting largely as a client for internet services.

Fifth Generation of Gaming: I would call this the Media Extender generation, and it is in full swing with Microsoft's focus on TV and video with the Xbox One.  It began with the PlayStation playing CDs and has been a major influence on the rest of the industry.  This generation will continue long after the PS4 and Xbox One have lost their luster, but it won't go on forever.

Sixth Generation of Gaming:  This is the other 'generation' we find ourselves in now, and I refer to it as the Advanced Control generation.  There were some past attempts to get other forms of control going beyond the basic two-axis stick plus buttons that both consoles and PCs have enjoyed for decades, but the real shift was the release of the Nintendo Wii.  It allowed for some very interesting and unique game experiences through the free-flowing controller wand Nintendo created.  This was so successful that, even though it didn't make a huge dent in the AAA gaming sphere, Wii games are among the highest-selling games of all time, because the Wii didn't feel the same as an NES or an Xbox.  Microsoft has taken this and run with it with the Kinect, and the new Xbox One will be almost completely controllable this way, with voice and gestures. This generation is here to stay; we will never again go back to a single method of control.

So there you have it.  Other than marketing, there is no actual reason to call the new consoles a new generation.  Their graphics were already nearly matched by the PCs of the previous console generation, and all the software advances they bring to the table were worked into the last models through updates during their lifetimes.  New control schemes, new graphics techniques and the media functions have all existed to some degree or another for the better part of a decade now.  The main point is that advances in gaming are evolutionary, not revolutionary, and they usually occur mid-cycle, not between cycles.

Please feel free to tell me why I'm wrong in the comments.