Fernando Sacchetto · analysis · Tags: actuarial, demographics, generation, singularity, technological singularity, technology, transhumanism

Being highly interested in the concept of a technological singularity, I’ve often wondered how far off it is. A lot of people have of course weighed in, based on the rate of technological development in the world… but something seems off about that approach. The main problem is that, between Moore’s Law and the many informational structures already in place today, we should either have crossed that point by now or be nearly there, and that’s certainly not the case.

Then it hit me – our main bottleneck isn’t technical, it’s human. (This comes from a career in government work, where I’ve realized just how obtuse the people in charge of managing everything can be – and from what I’ve observed of the private sector, it’s not much better outside the tech industry.) So I decided to plot who is how old at each point in time, what sort of technology was around during their formative years, and, as a consequence, how comfortable they are around each kind of technology and how fully they can realize its potential.

A few caveats. First, my approach doesn’t consider the political or ideological factors that may help or hinder technological development, just the demographic aspect. Second, this isn’t meant as a generalization about the technical ability of people of any given age – there will always be people who buck the norm (especially older people who are more tech-savvy than their age peers; after all, someone had to blaze that trail). I’m just talking about the point at which most people of any given age are exposed to any given technology. And finally, I’ve made this in my spare time mostly to amuse myself; it’s no serious study of any sort – all the numbers are approximate, and all the data are rectally retrieved. Which means you’re more than welcome to critique and correct anything below – and if you have any source at all (which I don’t), I’d be especially glad to see it.

All that said… here’s my (completely unfounded) Demographic Analysis of the Technological Singularity Timeline! First, there’s the table where I’ve plotted ages vs. timeline. Then I explain my assumptions about the ages and timeframes involved, and finally there are my reasoning and conclusions.

MASTER TABLE

KEY:

Rows: Timeline, by decade

Columns: Age brackets

Cells: Generation (by date of birth) in each age bracket at each point in time

Blue cells: Most societally influential age brackets

Bold: Key generations of the technological singularity

(Note: When translating, I missed a cell in the first row. “Até 1900” means “Up to 1900”.)
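The table’s logic is simple arithmetic, so here’s a minimal sketch of how it can be reconstructed programmatically. The bracket names and decade ranges follow the sections below; the open-ended “Ancient” bracket is treated as 70-80 here, and all function and variable names are my own invention, not anything from the original table.

```python
# Sketch of the master table's logic: which birth-decade generation
# occupies each age bracket in each decade.
BRACKETS = [
    ("Child", 0), ("Teenager", 10), ("Young Adult", 20), ("Mature", 30),
    ("Middle-Aged", 40), ("Senior", 50), ("Elderly", 60), ("Ancient", 70),
]

def generation(decade, bracket_start):
    """Birth decade of the people in a given age bracket during a given decade."""
    born = decade - bracket_start - 10  # start of the 10-year birth window
    return "Up to 1900" if born < 1900 else f"{born}-{born + 10}"

def master_table(decades=range(1960, 2051, 10)):
    """Rows: decades; columns: age brackets; cells: generation by birth decade."""
    return {d: {name: generation(d, start) for name, start in BRACKETS}
            for d in decades}

table = master_table()
print(table[2030]["Middle-Aged"])  # people aged 40-50 in 2030 were born 1980-1990
```

For example, `table[2030]["Middle-Aged"]` comes out as “1980-1990”, matching the cohort the conclusions single out.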

Age-bracket explanation/analysis

Child (0-10): Early formative years. Will form the basis of one’s psychology. While not that important for technical learning, it’s crucial for the sake of feeling at home around technology.

Teenager (10-20): Late formative years. While psychology is mostly set, it’s still pretty malleable. Exposure to technology at this age will still lead to a rather tech-comfortable attitude, especially if technology is used for social contact (the main thing being developed in teenage years).

Young Adult (20-30): These people are entering the job market and mostly form the base of the workforce pyramid. So, while they do have a fair amount of influence in society, they’re generally not in charge of decision-making. Exposure to technology at this age will lead to a fair amount of proficiency, although not necessarily a tech-comfortable mindset.

Mature (30-40): These people begin to be really influential in society, since here’s where you start to see middle-management, as well as younger entrepreneurs and politicians. First exposure to technology here has a somewhat diminished effect, although it still helps a lot.

Middle-Aged (40-50): Here we start to see upper management, as well as the younger people in charge of large organizations or important government offices (plus most of the remaining middle-management positions), so this bracket carries much greater influence. First exposure to technology here tends to lead to rudimentary (although still often enthusiastic) practical application.

Senior (50-60): These are the bulk of upper management, as well as many of the most important leaders in industry and politics, making them about the peak of societal influence. First exposure to technology here is often met with skepticism, and leads to limited practical application even when it’s not.

Elderly (60-70): People begin to retire here. While this age bracket includes many of the highest offices in industry and politics, they’re not managing that many day-to-day decisions, so I’m considering their influence somewhat smaller compared to previous age brackets. Exposure to technology at this point tends to be met with resistance and skepticism, and practical application tends to be grudging.

Ancient (70+): Most in this age bracket will be retired. While there are definitely very influential leaders here, those are few in absolute numbers, and their decision-making tends to be more removed from day-to-day application, so overall influence is rather diminished. Exposure to technology tends to be met with severe resistance.

Timeline decades explanation/analysis

Up to 1960: While there were certainly people working hard on developing informational technology, most people didn’t get to see any of that. As far as most people were concerned, computers were just a sci-fi concept.

1960-70: Enterprising engineers in universities and research centers are busy building better and better hardware and developing the foundations of modern software, and computers begin to see some limited practical application – say, in the military. Outside of such places, though, people may begin to see a few references to computers in the news, but otherwise computers are not a part of their lives.

1970-80: Practical application of computers begins to expand to banks and a few other companies, and the number of people who encounter them (in a professional or university context, so in their 20s-40s) starts to increase, but is still rather small. Networks begin to be a thing, mostly in universities.

1980-90: Computers start to become more popular and accessible, showing up in a (still not that large) number of households, and are therefore used by a few children and teens. They’re used in a lot of companies and other professional contexts (not to mention universities), and even those who don’t use them personally start knowing people who do, or seeing them a lot in media, so they’re at least a familiar sight. Networking, such as BBSes, is thriving in universities, but scarcely used elsewhere.

1990-2000: This is a major change of gears for the popularization of technology. Not only are computers a lot more accessible and widespread, showing up at a large number of households (verging on a majority of them) and pretty much ubiquitous in corporate settings, but the internet explodes in popularity in this decade, leading to the infamous dot-com bubble. Financial turmoil notwithstanding, most people become at least familiar with the internet in this decade, even if they don’t actually use it, though a lot of people still don’t quite know what to do with it.

2000-10: The Web 2.0 age. Firstly, anyone who wasn’t using computers in the previous decade is using them now, like it or not. Early smartphones mean a few enterprising people have them wherever they go. The internet also becomes pretty much built into any computer, so most people are on it. Moreover, this is when social networks and other social environments, such as forums, really take off, giving even people without any particular interest in technology something to do online. Most children use computers in a limited way, under supervision, and few teens are not chatting with their friends online whenever they can.

2010-20: Present time. The trends of the previous decade have only deepened. Computers (with internet) reach even the most underdeveloped and remote places, and are a first-order necessity in developed ones. You’d have to make an unreasonably large effort not to use one, at least if you’re working-age. Not only are practically all businesses dependent on the internet, but social media have expanded to become the main form of interaction for many people. The popularization of smartphones and tablets means a large number of people, perhaps most (depending on which point of the decade and which place you’re talking about), have a fully functional computer in their pocket at all times. Tablets also mean most children have daily contact with technology, at least for games. (Teens are fully immersed in online communities as a rule, more so than adults.)

2020 and later: That’s of course the future. It’s what we speculate about here.

Analysis of generations (by decade of birth)

Going from the premises in the previous two sections, here’s how I’m reading each decade-based generation:

1930-40 and older: A few of these folks started seeing computers in a professional setting in their 40’s and older; many retired without ever seeing any. Therefore, their relationship with technology, if any, tends to be grudging at best. The 1930-40 generation specifically was most influential in the 60’s to 80’s, when you could perfectly well get away with not liking computers.

1940-50: These folks generally started seeing computers in a professional setting, between their 30’s and 50’s. Which means a lot of them might remain skeptical of computers, as a newfangled thing that arrived just to make their lives harder, and just became annoyingly more ubiquitous as time moved on. Their height of influence was in the 70’s to 90’s, so computers became more and more relevant as they became more and more influential.

1950-60: A few of them began seeing computers at university or work in their 20’s, but most started using them in their 30’s to 40’s. That makes them a bit more amenable to using computers in a major way than the previous generation, but still not fully immersed in that culture. Height of influence in the 80’s to 00’s, which means they effectively ruled over the transition that made computers practically the most important part of life.

1960-70: Often started using computers in their 20’s, at most in their 30’s (and a few even in their teens). Generally found the internet in their 30’s or at most 40’s, so while it’s not a primary need for them, they’re pretty used to it. The number of decision-makers who really “get” computers starts to increase dramatically beginning at this generation. Most influential between the 90’s and 2010’s, and probably the most influential generation right now – while maybe not its primary users, they shepherded the internet into taking over the world.

1970-80: Some started using computers as teens, most of the rest in their 20’s, and were kicking around online no later than their 30’s. While their mindset wasn’t quite formed while using computers, they’re still mostly comfortable around them and the internet (not everyone in this generation, though). Came into middle-management during the Web 2.0 era, and are climbing to higher ranks now – while most may not have the sort of vision of people who used computers in their formative years, they still get them enough to use them reasonably well in management.

1980-90: This is the first generation in which a few people started using computers as small kids, and a significant number were online in their teens. Even the “slower” people in this generation got into social media in their 20’s. Most of this generation is fairly comfortable around the internet, social media, and technology in general, making them the earliest generation that might really play a vanguard role in bringing about the technological singularity. They’re coming into middle-management right about now, so we’ve yet to see how these folks will influence society.

1990-2000: The second generation I’ve flagged as having a major role in bringing about the singularity. These are folks who usually got into social media as teens, a few of whom were already playing with computers as kids. Very comfortable around technology and its informational/social applications. Right now, these folks are entering the job market, most of them still too low on the corporate ladder to have much influence.

2000-10: A good number of them started using computers as children; at the very latest, they got into social media as teens in recent years (after it was already hugely influential), often through smartphones, meaning they’re online most of the time. They’re now in their teens, so we’ve yet to see how they’ll do at work or in creating culture, but their mentality is shaped in a big way by technology, likely giving them a talent for unlocking its full potential in ways previous generations have trouble matching.

2010-20: These are of course small kids who were born recently – but it bears mentioning that people of this generation and later are exposed to technology and online environments from a very early age, usually right as they’re beginning to talk and later to read, making their mindset completely shaped by the information-technology revolution.

Conclusions

Putting it all together – if you look at the “master table”, the intersection between the singularity-vanguard generations (born between the 1980’s and 2000’s) and the societally influential age brackets (30’s to 50’s) highlights the period between 2010 and 2040, when this 1980-2010 combined generation gradually takes over the more influential positions in society and rises to higher positions, until it spans all of the more influential demographics in the 2030’s. (At that point, even the older leaders in higher positions are usually of the 1970-80 generation, who are already fairly technically savvy.)
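As a rough sanity check on that intersection arithmetic, here’s a small sketch that computes, for each decade, what fraction of the influential age range (the 30’s-to-50’s brackets) is occupied by people born in the vanguard window. The ranges follow the article; the function and variable names are my own.

```python
# Sketch of the conclusion's arithmetic: how much of the influential age
# range is filled by the vanguard cohort (born 1980-2010) in each decade.
VANGUARD_BIRTHS = range(1980, 2010)   # singularity-vanguard birth years
INFLUENTIAL_AGES = range(30, 60)      # Mature, Middle-Aged, Senior brackets

def vanguard_share(year):
    """Fraction of influential-age people born in the vanguard window."""
    births = [year - age for age in INFLUENTIAL_AGES]
    return sum(b in VANGUARD_BIRTHS for b in births) / len(births)

for year in range(2010, 2051, 10):
    print(year, f"{vanguard_share(year):.0%}")
```

The share rises through the 2010’s and 2020’s and peaks around 2040, which lines up with the “by 2040” estimate.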

What this means is that I expect not just the pace of technological advancement to increase over these three decades, but also the influence it has on decision-making, and the tech-savviness of people in higher positions of power, in a feedback loop that accelerates technological development even more. The 2030’s should be an especially revolutionary decade, when all the demographic pieces are fully in place to enable a full-on technological singularity, but the technological revolution should have already started in the 2010’s. Based on this reasoning, if anyone asks me when I expect us to live in a post-singularity world, I’d say “by 2040”.

One final note: I came up with this table a few years ago, very early in the 2010’s. Let me just say… as the decade wore on, I haven’t seen my theory contradicted – in my opinion, anyway. As social media grow into an ever more powerful monster, things seem to be going the way I expected them to back in 2010, when I started to formulate my own thoughts on a singularity based on a network of ever more connected people eventually evolving into an emergent, higher consciousness.