Performance Enhancement in 2032: A Scenario for Military Planners.
A version of this article was written for a transformative technology scenario
project set in 2032, for the U.S. Army Logistics Transformation Agency
in 2004. Very minor changes since as follows: Dec 2008: Renamed "Digital
Twin" to "Cybertwin" (CT), a simpler term. Dec 2014: Renamed
Cybertwin back to "Digital Twin" (DT, or simply "twin"),
as that term, along with "personal software agent" (PSA), began
to be used more frequently among peers and the public. (In retrospect,
"Cyber" is too arcane a word).
Jan 2, 2032. Lucas Hightower, Human Performance Analyst, Army Corps of Engineers.
Hello. I'm a 22-year-old Harvard-educated Human Performance Analyst, or HPA. I work in an Army-Civilian development group called the Army Corps of Engineers. Have you heard of it? I like to read, and my friends would say I'm pretty reflective. I love my job, and spend a lot of time thinking about human performance. If you are interested in that subject, let me give you a window to my world, and you can tell me what you think.
We'll start with an interesting question: What is the best definition of the phrase "human performance enhancement" in a world where technology accelerates so rapidly that biological human beings effectively stand still by comparison?
There is an old futurist saying: "Human nature doesn't change, but our houses (read: the computing and communications technologies all around us) get exponentially better every year."
Back at the turn of the century, there was a big debate about what human performance enhancement (HPE) actually was. Many thought it was about trying to reengineer the biological and psychological human. Others thought it was mostly about improving our built environment and increasing the intimacy of our connection to well-built machines, so that our innate biological and mental abilities would be better used, and our many natural shortcomings better protected against. Since the 2020's, the latter view has definitely won the day.
For several generations people have tried all manner of ways to improve the human organism using various bio and cogno technologies. But it turns out that our incredible nonlinear complexity, delicacy, and biological, psychological, and social resistance to change all greatly limit the effectiveness of these schemes. Even our best medical therapies today do little other than restore the performance of the unhealthy to that of the mean. How few foresight professionals would have guessed this at the turn of the century!
All those old twentieth-century bioenhancement ideas about genetic engineering of humans, super-drugs for mental performance, extreme life extension, and implantable brain-machine interfaces (except for people with disabilities), turned out to be like the 1900s' ideas about flying houses and atomic-powered vacuum cleaners: possible in theory, perhaps achievable some day, but always outcompeted in practice by far more powerful, more efficient, and less controversial external digital alternatives every step of the way. There's just no better bang for the buck, and the social and political repercussions are far less bothersome as well.
Sure, you can find exceptions, but they either make negligible changes, like the so-called "super vision" you can get from your local eye doctor, or they aren't things you'd give to large numbers of people on the planet, which makes them valuable to a few, but uninteresting as national priorities.
Some examples might help here. It's true that if you are a marathon runner you can find bioneers who will replace your leg bones with titanium implants, with some health risk and at great cost, and you will run at least five percent faster, but you will then be barred from all the major competitions, and relegated to the biomodified or paralympic races, which are considered much less prestigious by the general populace. So what will you have really gained for yourself or the planet? It's a much more interesting challenge to design better wearable marathon coaching software (e.g., a more intelligent computational "house" for the human) that can be documented to give the average user a 2-4% performance boost annually (for most runners), and can be freely shared with everyone who wants it, worldwide. The performance yield for widely-adopted enhancements is so much better, in cost/benefit terms.
What about medications? It is true that you can stick drug delivery implants into your body for all manner of things, but except for curing disease, such systems have been shown to add only marginal benefit. As soon as you circulate any drug in your body, the first thing your cells do is downregulate their receptors to maintain independence (remember "homeostasis" in freshman biology class?), which makes it increasingly less effective in subsequent doses. That's why you can't give a human a temporary cognitive enhancing drug (memory, attention) without causing a "stupid period" afterward. The best we've ever been able to find are things like caffeine, which has as many people trying to get off it as are currently satisfied with its temporary alertness effect. In sum, these effects are now known to be mild by comparison to the measurable performance benefits of enduring natural mental states like flow (see Mihaly Csikszentmihalyi, Flow).
So unless you have control systems that run from the DNA outward, which we have no idea how to build, and can't ethically try to figure out by experimentation either, implants are very crude biomodification systems, no matter what Big Pharma will tell you. Using "cognitive enhancement" drugs in implant systems ("drugbots"), even caffeine, will easily burn out your receptors or addict you, which is why you can only install them under medical supervision, and are recommended to use them only for short transitional periods, like almost all the psychiatric drugs. Unless you can prove medical need, trying to jack yourself up with drugbots is just a recipe for pushing you into a subculture, and making it harder for you to get government and professional clearances. So where's the benefit?
Meanwhile, our best computers double in complexity every eight to ten months these days, down from every 14 months back at the turn of the century. How could biology compete with that? To get in the zone today, most everyone uses old, proven rituals and their personal software agents ("digital twins", or just "twins", more on these later) to help them achieve peak performance. A top, nutrient-optimized and toxin-free diet, consistent and body-type-optimized exercise, and a great living environment will beat our best top-down interventions every time. So if you want to max out your performance, get a better BioBed and robokitchen, and some great virtual (or real) performance coaches in your avatar network, who will push you to enter well-chosen competitions. You'll achieve a personal best in no time.
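To make the arithmetic behind those doubling times concrete, here is a quick back-of-the-envelope sketch (a hypothetical illustration in Python; the only inputs taken from the text are the 14-month and roughly 9-month doubling times):

```python
# Illustrative arithmetic only: compare capability growth over one decade
# at the two doubling times mentioned above (8-10 months vs. 14 months).

def growth_over(months: float, doubling_time_months: float) -> float:
    """Capability multiple after `months`, doubling every `doubling_time_months`."""
    return 2.0 ** (months / doubling_time_months)

DECADE = 120  # months
old_pace = growth_over(DECADE, 14)  # turn-of-the-century pace
new_pace = growth_over(DECADE, 9)   # mid-range of today's 8-10 month pace

print(f"14-month doubling: ~{old_pace:,.0f}x per decade")
print(f" 9-month doubling: ~{new_pace:,.0f}x per decade")
```

At a 14-month doubling time a decade buys a few hundred fold improvement; at nine months it buys roughly ten thousand fold, which is why no biological intervention can keep pace.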
In fact, the better our systems biology has become, the more we've come to understand that humans simply can't be improved much more in their biological abilities. This is kind of subtle, but because DNA has so much legacy code that is protecting older systems, we now know that we have just about "saturated" our ability to make genetic changes to the human organism. Our wetware is just too delicate, old, slow, and sloppy to improve it significantly, and we have no idea how to reengineer it to work better from the ground up, because unlike computer tech, we didn't build biotech in the first place. That's why those few bio modifications that might do even a little good are mostly considered too dangerous to be publicly allowed. Nowadays you need expensive licenses to be a biohacker, thanks to homeland security.
That isn't to say that studying biology isn't important. The more we learn about systems biology, the better our brain scanning and neuroscience gets, the more payoffs we gain when we transfer this knowledge to the realm of "digital biology," into our increasingly biologically-inspired computing systems. Folks like Peter Bentley (On Growth, Form, and Computers) were some of the first to figure this out at the beginning of the century.
Let me be a bit abstract here, because so few people understood this just 30 years ago. When you think of human beings as information processing (IP) systems, you can evaluate them in relation to functional IP categories, like: Input, Storage, Processing, Output, and Networks. It turns out that in individual human organisms, almost all of these categories (networks are the only exception) are already tuned, from the bottom up, to be operating at near-maximum capacity. In other words they have all been "developmentally optimized", over long evolutionary timespans. In a simple model, you might think you could significantly improve human performance by adding a new input, say, X-ray vision. Wouldn't it be great to be able to see through walls?
Today, we know that there is always a significant cost to specializing humans in this way, one that is less and less worth paying the more empowered people become, and the better our machines become at doing the same task. Human nervous systems are deeply tuned for the kinds of sensing they do, and they don't adapt well to different inputs without losing generic functionality. Attempts to add new input capacities to the brain, as in sensory substitution therapy for disabled individuals, are never as effective as the original systems they attempt to replace. It is actually the careful filtering, or throwing away, of sense data that creates intelligence, and our brains are already tuned for the maximum sensory input their slow-switching, delicate bioneurons can handle. That is why the human thalamus, the main way-station for sensory input on its way to the cerebral cortex, throws away over 95% of the sensory input that flows to it.
In a related example, it's also why plants throw away as much as 98% of the solar radiation that hits their chloroplasts in the process of photosynthesis (the making of sugars from sunlight). If they didn't radiate away almost all of that energy right at the outset, they would blow themselves apart, because they are built out of delicate peptide-bond biostructures. But if a solar panel is made out of metal instead, it no longer needs to throw away that energy to stay in one piece, allowing it the possibility of becoming almost 50 times more efficient.
To make a long story short, you can't do much to improve those five functional IP categories in biological humans, but you can dramatically boost all of them in our technologies, and by extension in human society, in the emerging social computer that is now being created by our information and communications technologies.
You can see this most obviously in the better performance every month of our digital twins (DTs), the online software agents that act as interfaces for us on today's internet. (More on DTs a bit later). Besides infotech, this area is sometimes called sociotech (social technology, socially-aided HPE) and it is the prime growth area of my generation.
Once our HPE programs moved to building tools instead of trying to tinker with the biological human, the next question became whether to work on intelligence amplification ("IA") (tools that increase human performance) or artificial intelligence ("AI") (tools that increase machine performance). But as our machines become more life-like and more closely integrated with us every day, we have learned that these two paths increasingly end up being the same thing.
Consider how a good tool for searching environmental information, like Google, makes both us and our machines smarter at the same time. Is there a meaningful difference today between smart humans and any of the smart "prostheses" (technological appendages) that are designed to be seamless with humans? Could we survive, in any sense we would want to, without our technology today? It has become an organic component of ourselves.
Much more important than whether we use IA or AI technology strategies is the way that new technology impacts the human environment. Here are some of the questions we ask when we are planning our Army development projects, which I'll get to a bit later:
These are all social factors that can be quite difficult to address, but we do our best to deal with them because our best answers to these questions may be the key to good development. As we will come to see, after security and futuring, development is the most important thing the Army does these days.
B. Some Dominant HPE Technology Sketches
Now let's take a brief look at three HPE technologies transforming life here in 2032.
Let's consider first the leading HPE technology of our day. This is probably the one that most folks would say is the biggest single change in life today vs. the turn of the century, and one that wasn't generally expected to happen so quickly or broadly. I'm referring to "talkware," also known as the conversational interface, or "CI". Let's see if we can understand why it has become both the prime mover of our digital society and one of the Army's greatest global security and development priorities.
The CI is the natural language front end to our increasingly intelligent internet. CIs began in primitive form in the late 20th century within interfaces like Google. They grew rapidly after the turn of the century, and are one of the few technologies out there that actually lived up to the predictions of the futurists. Remember the Google Avatar OS of 2015? It was the first operating system (or "universal browser") that really allowed us to talk to our software at a level where we could get things done. It also included early versions of avatars, sophisticated software simulations of human beings that were the centerpiece of the interface. These avatars were surprisingly good at speaking to us in a simple "pidgin" language that we all quickly learned to understand, primitive though it was at first.
As a few computer scientists predicted thirty years ago, a field called statistical natural language processing (NLP) was the main thing that computers really needed to be able to converse well with human beings. Once statistical NLP systems were harnessed to our rapidly exponentiating processing and communications hardware, and our proliferating context-specific storage platforms, something that started in earnest in the late 1990's, they began to improve noticeably with each passing year (see Christopher Manning, Foundations of Statistical Natural Language Processing).
It turns out that, in order to talk intelligently to us, computers never needed to understand the words they used, only to have access to a very large record of all the things humans were saying to each other, in all the high-probability human contexts, around the world, and to be able to cross-index all that information to find the most common emergent patterns. By the late 2010s the internet was large enough, and computing power and databases advanced enough, for statistical NLP to start moving into the realm of human conversation. In 2004 people were "talking" to Google two hundred million times a day, with an average of 2.4 words per query (typing them into their search bar). By 2017 this conversation had moved up to an average of eight words per query, mostly spoken words, with the results better than ever before. It felt like natural conversation, albeit with a fast-talking but slow-witted assistant, from that point forward.
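The pattern-counting idea at the heart of this can be sketched in a few lines. This is a toy bigram model, not any real system: the corpus and function names below are invented for illustration, and it predicts the next word purely from co-occurrence counts, with no "understanding" of meaning at all.

```python
from collections import Counter, defaultdict

# Invented toy corpus standing in for "a very large record of all the
# things humans were saying to each other."
corpus = (
    "the twin answers the query . "
    "the twin logs the query . "
    "the twin answers the call ."
).split()

# Count bigram frequencies: how often each word follows each other word.
following = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    following[prev][word] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequent continuation observed in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("twin"))  # "answers" (seen twice, vs. "logs" once)
```

Scale the same counting trick up to an internet-sized corpus and context-specific storage, and you get the statistical-NLP trajectory the paragraph describes.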
Our CI conversations get even better whenever we run personal "lifelogs", or automatic audio or video records of everything each of us says, both to our computer (by text or voice), and to our friends when we are in "public" mode (which is most of the time these days). These systems were pioneered by the military and a few other folks back at the turn of the century, but "lifelogging" and sharing of public life experiences among "symbionts" (groups of people with extensive access to each other's personal logs) became popular with kids around the world in the 2010's.
Lots of adults still don't use lifelogs much, as there is a trust issue involved, but I run mine 24/7, except in zones where privacy protocols turn them off automatically, of course. I think it makes us so much more productive and reflective to have video and audio of our past experiences available at any time. It only takes a quick voice query and short conversation with your CI to call up an audio or video scene for review or sharing.
The massive data sets represented by our effectively infinite internet and our growing personal lifelogs, in combination with surprisingly simple algorithms, have allowed a bunch of context-dependent higher grammars to emerge. These grammars were poorly constructed in the first talkware systems, but once millions of people began talking to all their technology (computers, cars, phones, houses, tools of every kind) through the CI, praising, criticizing, and getting feedback, we rapidly "pruned" the responses so they made better sense, to each of us. Talkware became valuable to all of us much more rapidly than anyone expected.
Here in 2032 we understand that the CI catalyzes two new capacities of major social and personal importance:
With the help of the CI, we are beginning to understand the kind of dialog that champions diversity wherever there is no significant social cost, the kind that builds and preserves trust, even when so many of our social problems still have no immediate, easy solutions. When people are understood, they can agree to give each other space to disagree. Achieving the understanding and trust is half the battle, and the CI enhances that like nothing we've seen before.
Some people call this just a new type of "political correctness," and youth everywhere love to rebel against it, but it's pretty clear that with our CI's help, we are all "learning the words" we need to speak to each other that advance both the common and the unique values we care about, while protecting human rights and dignity. It's a very exciting time to be alive.
This next advance in many ways is even more revolutionary. When Google's Avatar OS emerged, it became clear that we were no longer content to simply talk to our computers as though they were disembodied machines. We wanted to relate to our favorite virtual human beings, and to choose from a broad range of possible personality types. We wanted this because human-like agents/avatars have an ability to communicate nonverbally with us, to frown or place their hand on their chin until they understand what we are telling them to do, to smile when they detect we are smiling at their jokes, to talk and act in a calm and relaxing manner when their voice analyzers tell them we are upset, to speak more rapidly when they see we are bored or hurried, etc.
Having a nonverbal visual channel running in parallel makes all our linguistic communication a lot more efficient than talking to a disembodied CI. It's why human beings prefer face-to-face meetings over telephonic meetings for a wide range of interactive tasks. In order to do this well, our CI-equipped virtual avatars began to capture the general features of human personality (so called "personality capture"), as well as the specific features of the users who were running personalized software agents, or digital twins (DT's). To facilitate our ability to understand and benefit from them, they began to model human preferences and display human emotion and body language.
Due to the highly repetitive nature of human behavior and the rapid increase in computer capacity, they are learning to do this with a speed and consistency that is astounding. Both talkware and DT systems get measurably better every month now, just like the cognitive capacities of young human children, and these systems are upgraded constantly behind the scenes, with no hassle to the user.
Today, my twin has a surprisingly good understanding of my preferences, and I'm coming to see it as a bona-fide extension of me. I know it doesn't have anything like my kind of "consciousness," but I'm continually amazed when it tries to complete a sentence for me (often correctly!) when I'm tongue-tied, or when it correctly understands my mood and inner thoughts and feelings. Some people (not the children, of course) find this invasive and don't like to run personalized DT's. But almost everyone uses generic avatars to get the most out of both technology and life in general.
The great thing about having a DT is the insight and productivity you gain. If you are trying to learn a new skill, change a behavior, or even change your personality a little bit, your DT can introduce you to a whole range of programs involving incremental changes that actually work. If you let them, your DT will roam the internet for you, 24/7, and learn new things that you would actually want to know, bringing back options for numerous social and business contracts, which are increasingly the kinds of things you most want to do with your precious time and resources.
Today, people are beginning to feel that their software twins are their best companions, coaches, teachers, and protectors. The level of trust has gone sky high with the average unsophisticated user, but the reality is the technology still needs a lot of work. People still try to game the DT systems, even sabotage them or use them to harm others. That's where the national security apparatus (police, FBI, DoD) comes in.
Building immune systems that keep everything working together well is key. This is all reminiscent of the problems we had with spam and computer viruses back at the turn of the century, but the systems at stake today are far more socially valuable. Lots of people would feel naked and stupid today without their DTs, and they become very unhappy whenever they are compromised.
What is the Army's unique role in this process? Preventing cyberterrorism is one of our top responsibilities in the modern era. The DoD, through the Joint Military Group, does all the first level development testing of our next generation networks, which we try to make as secure as possible. We also use existing networks for our wargames, looking for unanticipated threats.
We have a huge, classified section of the net with trained folks who intentionally try to wreck what's working today, and others who try to track them down and prevent the damage ahead of time. Army soldiers in many specialties play a big role in this process, and they collaborate with soldiers in other countries for worldwide campaigns. Other security agencies use these resources too, but only the Army fields the numbers of classified players needed to run our biggest security and attack ops. All this data is reviewed constantly to keep the new internet running well. The system works rather well, as there have been few major disruptions. For my part, I enjoy being called in to do my little bit whenever I can.
Back in the 1980's the technology futurist George Gilder introduced us to the microcosm (the world of the computer chip and the new economics it unleashed, as described in Microcosm). In the 1990's and 2000's this enabled the telecosm (the all-connected, telepresent world of the internet and wireless, as described in Telecosm). But that wasn't the end of the story. In the 2010's, increasing sensing and storage of physical space relationships (RFID, web services) enabled the emergence of the datacosm, a world where the data about things became one of their most important attributes.
In the early 2000's the futurist Bruce Sterling told a story about the purchase of a broom, and how the haze of data around the broom, accessible on scanning it (at the store, at a friend's house, etc.) had become more important than the broom itself. This data includes information like where you can find equivalent brooms for sale (in your local area, for overnight delivery, for the cheapest price, etc.), what customers and critics have to say about the broom, what brooms sell best to people like you, etc. Sterling was describing what some today would call a mature datacosm, linked to some simple human values, like "having a good broom." In the datacosm we begin to become more interested in the data about many material objects (what Sterling calls "spimes") than in the objects themselves, which are simply their current version of physical instantiation, with almost every type of object being continually refined by social feedback (with regard to an object's attributes, distribution, cost, method of manufacture, etc.). Today's agents can use the datacosm to help us with just about any choice we'd like to discuss.
Today, we are seeing how the datacosm is leading to the emergence of something even more interesting. With semi-smart avatars representing us on the web, we are creating detailed, quantitative and qualitative records of the choices we make about our lives, both major and minor. This data represents the beginnings of the valuecosm, which is all the ways our choices express our values and goals, and the ways we use to measure our progress toward them.
With this new information, our avatars are learning how to look for ways to maximize the future value of our choices, both for us individually and, wherever possible, for our associates and for the wider world at the same time. Not only things, but even our values now have identities in the datacosm, as extensions of our own and other people's intentions. With values data increasingly accessible to our avatars, their advice is becoming significantly more useful across the board. Not only can we get good information on the more objectively quantifiable datacosm-type choices discussed earlier, but even in subjective and qualitative realms (style, humor, ethics) the valuecosm is beginning to give us feedback that is helpful to the decisions we make.
Potential for abuse of valuecosm technologies is quite significant, particularly as small, highly motivated and radicalized groups can find each other and make common cause faster than ever before, so there's a lot of oversight in these areas. A whole new area of law has emerged to make sure that DT systems give advice in ways that improve the education and decisionmaking ability of the user. This is a complex and controversial subject I could talk about at length, but let me just say that what the government wants to see today is DT systems that educate people to make better decisions for themselves, even in light of the increasing "intelligence" of the valuecosm.
Looking to the future, we can see that individuals are going to become a lot better educated and opinionated over the next generation than they've ever been before, which can only help our democracy. You could sum this up by saying the emergence of the CI has allowed us to begin to reform education for the first time in 150 years, globally democratizing it and freeing it from the monopoly of classrooms and human teachers. At the same time, the emergence of the valuecosm has reemphasized the value of the human teachers who can help us to become independent thinkers, and of education systems that help us to make decisions for ourselves in ways we never did before, and to understand the benefits and limitations of the advice we get from others, whether human or computer. The future of education is a very promising one these days.
Back in the early 2000's, systems like Wikipedia, open source software, social networking and consumer rating sites were early starts toward collectively building the valuecosm. These systems grew in value every year but they really couldn't take off until we had a semi-smart talking digital interface, guiding us through the maze of choices to some of the best ones available. That was the last major piece missing, until the early 2020's anyway.
Game theory tells us that there are natural optima for "positive-sum" behaviors that allow us to grow the size of the pie while getting our slice too. These are the kinds of choices the valuecosm is best at computing, even where we don't see the options ourselves. There are also choices where individuals readily see the value of sacrificing personal goals for the greater good of the whole. This is Robert Trivers' (Natural Selection and Social Theory) "general altruism" in action. The valuecosm is sometimes helpful to us with these choices as well.
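A minimal sketch of what "computing a positive-sum choice" means, using an invented two-player payoff matrix (all numbers hypothetical, and this is my own toy illustration, not how any real valuecosm engine works):

```python
# Toy positive-sum game with invented payoffs. Each player chooses
# "cooperate" or "defect"; values are (row_player, col_player) payoffs.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # the pie grows: total value 6
    ("cooperate", "defect"):    (0, 4),
    ("defect",    "cooperate"): (4, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection: total value 2
}

def total_value(row_choice: str, col_choice: str) -> int:
    """Size of the whole pie for a given joint choice."""
    a, b = payoffs[(row_choice, col_choice)]
    return a + b

# A valuecosm-style recommendation: the joint choice that maximizes
# total value while still giving each player a slice.
best = max(payoffs, key=lambda pair: total_value(*pair))
print(best, total_value(*best))  # ('cooperate', 'cooperate') 6
```

The point of the toy is that the optimum for the whole (6) is invisible to a player who only maximizes the individual slice (which points at defection); spotting those jointly better options is exactly the kind of search the valuecosm is good at.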
But it is also true that there are a number of values (like the safety of country and family) that are so important to protect that many of us would be willing to die for them. Immune systems and battlefield heroes work in much the same way: individual cells risking and sometimes dying for the benefit of the system being protected. With such extreme choices, the valuecosm is presently the least helpful. In such circumstances, any good avatar will be silent and tell you to decide for yourself.
In fact, if our avatars don't quickly learn how to shut up at the right places, if they act in ways that seem to incite hatred or conflict, that's where the lawyers and government step in to make sure a recall occurs. In general, by careful pruning, we are building an amazing system, consistent with national and international law, that helps us all toward our personal and collective goals in better ways every day. Like the electrical grid, the system is "unreasonably effective" if you think about it. You would expect it to break down a lot more often than it actually does.
Consider the first sentence of Leo Tolstoy's Anna Karenina: "Happy families are all alike [read: developmentally optimized], unhappy families are unhappy each in their own way." Psychologists have long known that in happy families, much of the conversation is about figuring out ways to do unexpected good for the others. This dialog often seems inefficient in early stages, but when families get really good at it, they do more and talk less (or at least, they respect the desires for some family members to talk less).
In a happy family, everyone knows the others' declared values even if they do not personally share them, and tries to help them come closer to what they want. Such family members are also generally good listeners and advisors when any individual is engaged in a values reassessment. In other words, happy families encourage individual growth, and the better we become at optimizing our earliest goals, the more we learn to pick up new goals and values that better fit with our new capacities and the wider cooperative world.
'Happy' networks work the same way. The emerging set of values our avatars learn from us are allowing us to build happy networks, which look increasingly alike (they have found a developmental optimum), even as they contain unique and independent individuals.
As the valuecosm is better defined and our DTs become better at interpreting it, we are learning to look more often at the past and future value of all our choices. Not only our assets and investments, our insurance policies, our net worth, our health, but also our more intangible life choices are increasingly scrutinized and assessed. We don't do this with any great precision, but with a lot of probabilistic sampling. Because we have a clearer way to see how our present actions might influence our long term future, we have more incentive to make better choices now.
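The "probabilistic sampling" above can be pictured as a simple Monte Carlo estimate. Everything in this sketch is invented for illustration (the two "life choices," their costs, and their payoff distributions); the only idea taken from the text is estimating a choice's long-term value by sampling many possible futures rather than computing anything with precision:

```python
import random

def sample_outcome(upfront_cost: float, mean_payoff: float, spread: float) -> float:
    """One simulated future: a payoff drawn from a normal distribution,
    minus what the choice costs you now. All parameters are hypothetical."""
    return random.gauss(mean_payoff, spread) - upfront_cost

def estimated_value(upfront_cost, mean_payoff, spread, trials=100_000):
    """Average over many sampled futures, the way a DT might rough out
    the long-run value of a choice."""
    random.seed(2032)  # fixed seed so the sketch is reproducible
    samples = [sample_outcome(upfront_cost, mean_payoff, spread)
               for _ in range(trials)]
    return sum(samples) / len(samples)

# Compare two invented life choices by estimated long-run value.
take_course = estimated_value(upfront_cost=5, mean_payoff=12, spread=6)
do_nothing = estimated_value(upfront_cost=0, mean_payoff=4, spread=1)
print(take_course > do_nothing)  # True for these invented numbers
```

Note that the riskier choice (large spread) can still come out ahead on average, which is why sampling lots of futures beats eyeballing a single expected outcome.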
In practice, human nature being what it is, it's usually our DTs that are most motivated to maximize value. In moments of conscious resolve we tell them what we want. The rest of the time we often act like undisciplined kids, who keep trying to break the rules, while our DT's patiently motivate us to follow the increasingly obvious positive-sum paths on the emerging values landscape.
Personally, I notice my DT is ever more artfully coaching me in what to do and say, and helping me to understand successful ways to think and feel. All this makes us feel like dummies sometimes, but none of us wants to be without the advice, whether we choose to take it or not.
How did we arrive at this astonishing place, so quickly in this new century? Why do a special few technologies, like infotech and nanotech, continue to astound us in their acceleration, while most, like biotech, cognotech, space exploration, energy, transportation, desktop manufacturing, and so many others fail to live up to the hype?
Back in 2000, no one understood why computers were doubling in power every year, learning at electronic speeds, which are tens of millions of times faster than biological speed. Or why they got better every year at making new versions of themselves, with less and less human help. Or why cosmic, earth, human, and then technological history had each run incredibly faster than what came before. A twentieth century astronomer named Carl Sagan (Dragons of Eden) noticed this continual acceleration and called it the "Cosmic Calendar." He said it was an unfinished puzzle of science, and that someone would eventually figure it out.
That someone was Clive Ramanja, who showed in 2023, when I was thirteen, that computational acceleration is built into the physics of the universe. Today everybody calls him the "Einstein of Information Theory." He basically invented the field of developmental physics, which has drastically changed our perspective on complexity science. Among other things, developmental physics tells us how performance develops in all complex adaptive systems, humans included, so it has been very helpful to our performance enhancement priorities.
At first, few people understood the new paradigm, which said that everywhere in the universe, local intelligence was going from physics to chemistry to biology to technology to the cyber world, in a process that begins in outer space, at galactic scales, and ends in inner space ("the metaverse"), in the world of super-fast, hyper-realistic simulations run on a substrate of physical structures that are very, very small.
But the predictions from this new discipline get dramatically better every year. It looks like the universe is specially structured to allow continually accelerating computation, and developments at the nanoscale continue to reaffirm this perspective. Today, many institutions take this outlook seriously, and this is particularly true for the Army, which has prided itself as a founder and leader of modern futures work, ever since Army general H. H. "Hap" Arnold formed Project RAND (now the RAND corporation) in 1945.
Like thermodynamics, another kind of "statistical" law of nature, developmental infodynamics predicts that the leading edge of Earth's intelligent systems always figures out how to use less Space, Time, Energy, and Matter (so-called "STEM compression") to do computing. Because of this fundamental trend, the acceleration never stops, and the leading edge of computation, as opposed to specific computational systems, never runs into limits to growth. In addition to STEM compression, infodynamics tells us that increasing intelligence, interdependence, and immunity are properties that have to emerge in all the most successful systems as they develop.
All this has been oversimplified in the press as "doing more, better, with less resources." If you can't see how to use continually less matter, energy, space, or time (physical resources) in your scheme to improve human performance, then you aren't operating at the "leading edge" of the tidal wave—somewhere else on the planet things are flowing much faster and more efficiently, and will soon change your game.
A greater ability to use STEM compression, sometimes also called "computational resource efficiency," is why infotech and nanotech have continued to surpass biotech and cognotech. Wherever you can create greater space, time, energy, or matter ("STEM") efficiencies within an organization, a supply chain, a product, or a service without losing any of its essence, you should do so, because if you don't, someone else soon will. All this irreversibly moves the most successful local environments into greater and greater zones of STEM compression.
Some operations research/management science (OR/MS) practitioners had understood this principle qualitatively for years. But now that developmental physics has provided a quantitative analytical framework, Army OR/MS, both for logistics and for all our strategic planning, has finally come into its own, and is increasingly redefining the way we measure STEM efficiencies. We increasingly use operations research to help us maximize both technical efficiency and the way our technology copes with human limitations.
For both human and machine decisionmakers, we look for ways to shorten our perception-action loops, for every performance we care about. Before Clive Ramanja made the field respectable, ecological psychologists did a lot of the theoretical work on perception-action cycles in the twentieth century. So did a few lone innovators like Colonel John Boyd of the USAF (Boyd), who taught his "OODA loop" model (observation, orientation, decision, action) of the perception-action cycle. As Jason Jennings said back in 20C, "it's not the big that eat the small, it’s the fast (and efficient) that eat the slow (and sparsely connected)."
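Boyd's cycle is easy to caricature in code. The four stage functions below are my own invented stand-ins, not anything from Boyd, but the structure makes the logistics point: what bounds performance is total loop latency, which is why we work to shorten every stage:

```python
import time

def ooda_cycle(observe, orient, decide, act, world_state):
    """One pass through Boyd's OODA loop; returns new state and cycle time.

    The four callables are hypothetical stand-ins for the observation,
    orientation, decision, and action stages of a perception-action loop.
    """
    start = time.perf_counter()
    observation = observe(world_state)   # O: sense the environment
    model = orient(observation)          # O: fit it to a mental model
    action = decide(model)               # D: choose a response
    new_state = act(world_state, action) # A: change the world
    return new_state, time.perf_counter() - start

# A trivially fast agent "eats" a slow one by completing more cycles
# in the same wall-clock budget -- Jennings's point in miniature.
state, latency = ooda_cycle(
    observe=lambda s: s,
    orient=lambda obs: {"threat": obs > 0},
    decide=lambda m: -1 if m["threat"] else 0,
    act=lambda s, a: s + a,
    world_state=3,
)
print(state)  # 3 + (-1) = 2
```

Whether the stages are human, machine, or a symbiont mix, the measurable quantity is the same: cycle time per decision, for every performance we care about.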
Driving all this acceleration are innovations in the microcosm, which continues to yield stunning STEM efficiencies. DARPA, through its global partnerships, continues to fund the best research work on the planet in fields like nanocomputing, optical computing, quantum computing, and even femtocomputing. No one expects this train to slow down any time soon.
Another big paradigm change is that in the old days everyone talked about "evolution." Now it's always evo-devo, or "evolutionary development". Charles Darwin got his science mostly right, but as we know today it's only half of the science we need to understand change. Both evolution and development need to be understood, at universal scales, to understand the full story of universal change.
Evolution is always random and unpredictable. Development is the opposite: it's all about the changes that are predictable, like computer acceleration, because these changes have to do with the discovery and use of preexisting optimizations that are hidden in the possibility "phase space" of the physical universe we live in, due to its unique and invariant laws and properties. You need to consider both evolution and development in order to really understand the future.
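A toy simulation makes the evo-devo distinction concrete. This sketch is my own illustration, not anything from Ramanja's work: each run wanders along an unpredictable path (evolution), yet every run ends near the same pre-existing optimum hidden in the landscape (development):

```python
import random

def climb(seed, steps=2000):
    """Random hill-climb toward the peak of f(x) = -(x - 7)**2.

    Each run takes an unpredictable path (evolutionary search), but
    every run converges on the same pre-existing optimum at x = 7
    (developmental attractor), because only improvements are kept.
    """
    rng = random.Random(seed)
    x = rng.uniform(-100, 100)           # unpredictable starting point
    for _ in range(steps):
        candidate = x + rng.gauss(0, 1)  # random variation
        if -(candidate - 7) ** 2 > -(x - 7) ** 2:
            x = candidate                # selection keeps only improvements
    return x

# Ten independent runs, ten different paths, one convergent outcome.
endpoints = [climb(seed) for seed in range(10)]
print(all(abs(e - 7) < 1 for e in endpoints))
```

The paths are unpredictable in detail, but the destination is statistically predictable, which is exactly the kind of claim developmental futures work tries to make about technology.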
Today, simulated worlds in fasttime, in inner space, are where most of our best scientific experiments take place, except for occasional slowtime experiments and data collection to prove the models. Our "sims" are getting so good that we now understand most of the physics, and a good deal of the chemistry and biology, that created us. What comes next, where we go after inner space, no one really knows yet. That seems to be the next frontier.
Knowing all this doesn't guarantee that any particular institution or nation will remain on the leading edge, as the most successful system, but it does give us attributes to measure in the process of improving human and social performance. That's what I do.
Since Ramanja's work became widely accepted in the late 2020's, the Army's new model tells us to look at both humans and their society as information processing systems, and think of all the ways we can design a global technological environment that creates more intelligence, interdependence, immunity, and STEM compression than any other strategy available to us. As a result, in addition to other goals, the best HPE surveys today attempt to measure progress on these four developmental trends.
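As a purely illustrative sketch of what such a survey metric might look like (the weights and the scoring rule here are my inventions, not Army doctrine), here is a toy composite over the four trendlines:

```python
def developmental_score(metrics, weights=None):
    """Toy composite score over the four Ramanja trendlines.

    metrics: dict with keys 'intelligence', 'interdependence',
    'immunity', 'stem_compression', each a year-over-year ratio
    (>1.0 means improvement). Purely illustrative; real HPE surveys
    would be far richer than a single number.
    """
    keys = ("intelligence", "interdependence", "immunity", "stem_compression")
    weights = weights or {k: 0.25 for k in keys}
    # A weighted geometric mean rewards balanced progress: a collapse
    # on any one trendline drags the whole score down.
    score = 1.0
    for k in keys:
        score *= metrics[k] ** weights[k]
    return score

print(developmental_score({
    "intelligence": 1.10, "interdependence": 1.05,
    "immunity": 1.02, "stem_compression": 1.20,
}))
```

The design choice worth noting is the geometric rather than arithmetic mean: a development that advances three trendlines while collapsing the fourth should not score well, and a multiplicative rule enforces that automatically.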
It's gratifying when the OR/MS simulation of a particular development can be shown to advance all four of these trends simultaneously. Events rarely develop so cleanly, of course. Nevertheless, the theories of developmental physics and the empirical approach of OR/MS are among the best tools we have to predict and manage the stunning changes we see daily in our extraordinary, computation-rich environment.
D. The New Military Priorities: Immune Systems and Resilience, Foresight, and Development
Here is a brief overview of three Army priorities that guide my work. Each works to create more global transparency, which is today the great goal of world security. One of the few benefits of asymmetric warfare has been that it is the prime catalyst for more transparency, so we are solidly on course to a much more transparent future.
Developing immune systems that protect against violence to the citizen and the state has always been the most central role of the military. In recent decades we have learned a lot about how well biological immune systems work in the same role in living organisms, and how organizational, social, and species resilience works. We try to apply this knowledge to everything we do.
Though it is difficult to quantify economically, many scholars believe the most valuable global export the U.S. has created over the last thirty years has been our market-dominant, evidence-based security paradigm, as it has allowed both global economic growth and social resilience to advance apace. Though any one individual can in principle do more damage today than ever before, in practice, statistically speaking, we are measuring less violence, crime, and uninsured loss of life or property in almost every country on the planet. And we have increasingly effective containment plans, on average, for the trouble zones.
Immunity and resilience theory tell us to think of all the security-seeking humans on the planet as "white cells" in a global immune system, and we are learning how to enlist them in hundreds of ways worldwide. There is always a statistically tiny fraction of abnormals and extremists relative to the mean, and good immune systems always leverage the immense power and diversity of the collective, and the vast majority of common-sense nodes in any network, using that network to ensure increasingly better surveillance and rapid regression to the mean after any disruption.
Transparency and interoperability are two of the keys to smooth functioning in networks. Do you remember when most parts of the internet were considered an anonymous yet public commons? No wonder the early internet was so abused, just as fishing in international waters was before sea farming rights came in vogue. Ask your parents or grandparents about the days when spam was more prevalent than email for most folks, because there was no accountability and no cost to sending it. As the internet became transparent, and all of us had to 'authenticate' in order to enjoy the privilege of being in the network, the problem disappeared. What spam, viruses, and intellectual property theft did was to stimulate the emergence of robust immune systems, and they took some time coming.
All networks are based on interoperable standards, useful rules and regs of interaction, so we do a lot of work simulating the best standards we can devise, ones that are neither too simple nor too complex. With proper transparency we also try to make it as easy to enforce the standards as possible, otherwise, they quickly lose value.
Without proper public standards and transparency, corruption and conflicts of interest abound. Look at Singapore, which developed from a third-world to a first-world nation over one generation, back in the 20th century. You can see how important fair standards and transparency are for economic growth, even if, as in this case, it was an autocratic rather than democratic form of transparent capitalism that efficiently eliminated the corruption. Even in 20C, leading futures groups like RAND saw the central importance of using market mechanisms to eliminate corruption and create strong immune systems (Robert Klitgaard, Controlling Corruption).
Having a mature, fair market for any public or private good is another example of a standard that enforces healthy immune networks. At the turn of the century many public policy groups measured market maturity and fairness (see Hernando De Soto, The Mystery of Capital) but even then people didn't understand just how important these metrics would become to creating safe and friendly environments worldwide. Today, due to the valuecosm, the development funding that is provided by the Army, other government agencies, and non-governmental organizations worldwide is usually tied to corruption, market maturity, crime, and other indices. This ensures better competition for funding and better index ratings, and helps development dollars flow to the places where they will be best employed.
Some cultures figured out open secure standards early, like Japan, whose cell phone standards allowed property owners to automatically turn off audible ringers (not lights or vibrators) in their places of business, except for security-cleared phones. All the manufacturers had to build to these protocols or they couldn't sell to distributors. When complex robots (Aibo, Asimo, and their progeny) became powerful enough to be teleoperated as weapons in the early 2020s, the DoD ensured that all manufacturers included GPS-based disable codes which keep the robots from working in various locations (schools, government buildings, etc.). Again, those that didn't agree to the new standards were forced out of the game. The same kind of location-based systems emerged a decade earlier for radio-control hobbyists.
The military has rightly become the ultimate arbiter of the security of our network standards, and their need for this became clear to the public with a few high-visibility kidnapping and terrorist events in the mid-2020s. Before fully transparent and location-zoned wireless networks, it had become too easy to attach a small, unobtrusive teleoperated device to someone to coerce them into some action, or face remote-controlled harm to the individual or their family.
Today, all players in the global arms industry have been asked to add transparent networks to all small weapons sold, and passive local positioning IDs to all ammunition. Most are agreeing, and the few that aren't will soon be brought into the fold. Fundamentally, they have no choice, as this is a global security issue and time is on the DoD's side. Buybacks of non-networked weapons are proceeding well, and we can all foresee a day when it will be very unusual to find any weaponry outside of museums that isn't transparent to the global security network.
What about global immunity against bioterror? Human-made super-viruses turned out to be significantly less dangerous than we feared. The bottom line is that our biological immune systems have always protected us, as a species, tremendously well against these simple invaders. Every plague in history occurred because of "differential immunity" that emerged between certain well immunized groups and other poorly immunized ones within the same species. That is why no pathogen in history has ever killed a species. In other words, in the history of life, immune systems always win, and the better our biologists understand them, the better we become at defending ourselves against everything that comes.
One of the most important and least-known facts about the military is that it has always done deep futures work. The Army started the modern futures field with Project RAND in 1945, and after a lull in the 80s and 90s began again to turn to systematic futuring in the 2000s as the pace of change continued to accelerate, and as the traditional enemy continued to evaporate. Then in the 2020s, once Ramanja demonstrated that many developmental changes were statistically predictable, the field of developmental futures took off, and now it informs everything we do.
Most change, of course, is evolutionary, which is why prediction has such a dismal history. But the special subset of trends and events that are developmentally inevitable (or "statistically highly probable," in analyst-speak) are becoming better understood. The combination of evolutionary and developmental contributions to change, and the ability to discriminate accurately between the two, is called Evo Devo Foresight, and academic groups such as the Evo Devo Universe research community, started in 2008, are increasingly grounding and validating these investigations. Developmental physics is certainly not as easy to do as classical physics (such as the motion of planets) but it is still tremendously predictive in its own right.
In addition to prediction, the military wants to create the future of defense, which means they are always trying to trim the lag time between innovation and diffusion of the Next Great Idea. As Everett Rogers showed back in 20C (Diffusion of Innovations), technology diffusion is always very sensitively dependent on resource, social and competitive conditions in the environment, and both innovation and diffusion can be greatly accelerated, in ways we understand today better than ever before.
Did you know, for example, that "water closet" (non-flush) toilets were invented in 1596 (by Sir John Harington), but even at six shillings eight pence, which was cheap for the time, they didn't come into wide use for 200 years? Social customs regarding hygiene simply weren't ready for the invention. Without social catalysts toward cleanliness, it took two more centuries before people found the portability and ease of cleaning of a water closet to be worth the extra effort of filling the bowl with water before use. Only after water closets were in widespread use was there an impetus to invent a new water network, the modern sewage system, in the 1890s. That network in turn allowed another innovation, the flush toilet, to become the new standard. And so goes the dance of evolutionary development, a stairstep process, often three steps forward and two steps back, until suddenly the world everywhere has permanently become a better place.
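The classic quantitative treatment of this stall-then-snap pattern is the Bass diffusion model, which splits adoption into external influence and imitation. The parameter values below are illustrative, not calibrated to toilets or to any real product:

```python
def bass_adoption(p, q, market, years):
    """Bass diffusion model: cumulative adopters year by year.

    p = innovation coefficient (external influence: ads, mandates),
    q = imitation coefficient (social/network influence).
    Shows why adoption can crawl for a long time and then snap
    upward once imitation dominates.
    """
    adopters = 0.0
    history = []
    for _ in range(years):
        new = (p + q * adopters / market) * (market - adopters)
        adopters += new
        history.append(adopters)
    return history

slow = bass_adoption(p=0.001, q=0.0, market=1000, years=30)  # no social pull
fast = bass_adoption(p=0.001, q=0.5, market=1000, years=30)  # strong word of mouth
print(round(slow[-1]), round(fast[-1]))
```

With identical external push, the run with social imitation saturates the market while the run without it barely moves, which is the water-closet story in two parameters: the invention existed, but the social multiplier did not.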
In the same way, a good futurist knows that most of tomorrow's innovations have already occurred (or as the 20C writer William Gibson said, "The future's already here, it's just not evenly distributed yet"). These new innovations are sprinkled all around the planet right now, as prototypes and sparsely implemented ideas, patiently waiting for enough resources, social approval, or a new network to make them irresistible. Whoever can predict the kind of networks that must emerge, shape the social factors, and provide R&D resources to the right groups, can greatly accelerate the future. Lately DARPA has a pretty good record doing the latter with the HPE tech I've mentioned so far.
This work has brought some new clarity to the futures profession as well. For example, when we began to scrutinize the failure of twentieth century HPE visions like human exoskeletons, VTOL transport, brain-machine interfaces, genetic engineering, organ generation, 3D printing, and several others, we learned that most of them could never have advanced much beyond prototype stage in the increasingly transparent and efficient digital environment of the twenty-first century, each for different reasons. Some utilized STEM compression less dramatically than people originally thought, others were outcompeted by more efficient digital solutions, some violated social immune systems, and some had no network that could use them, or had other basic strikes against them.
For logistics and futuring, once we had better developmental vision, it became evident that what the military really needed was a better ability to see the "envelope" of the most probable futures, not to build for all possible contingencies. Everyone knows there is always a combinatorial explosion of possibilities, and the world can only afford so much defense. Most inventory in the supply yard never gets used after it's built. What we needed most was not the end product but the knowledge of what just-in-time inventory might be needed under what circumstances. That's what the Army focuses on today.
Historians remind us of the momentous change that occurred in 1989 when "peace broke out" in the Soviet Union. This was perhaps the biggest political event of the twentieth century, and developmentally inevitable, if you listen to folks like Ramanja. Developmental strategies for arms reduction and quality of life improvement in the disconnected gap (all the worst "have not" countries) are what some of the strategists call "peacefighting" these days. This paradigm, shrinking the gap, has been the final frontier for national security since Thomas Barnett (The Pentagon's New Map) identified it at the end of the twentieth century.
My current job as a human performance analyst is to help evaluate whether societies in emerging nations feel both safer and more empowered this year versus last year, and if not, what the biggest roadblocks are. We send our status reports up the chain of command and every so often major new infrastructure development projects are proposed at the top. Most of them are targeted to emerging nations or hot spots, but at least 20% are launched here at home. When there is extralegal social resistance to these projects within hostile environments (read: guerilla warfare, sabotage), the Army deploys resources to ensure the development proceeds as planned.
There are all kinds of political dimensions to the development deals: who gets them, how they are structured, what kinds of grandfather clauses exist. Nevertheless, the general trend is quite obvious: increased planetary value along the Ramanja trendlines (intelligence, interdependence, immunity, and STEM compression) every step of the way.
The Army has always had stunning developmental capacity whenever it chooses to use it (think of the Panama Canal and the Manhattan Project). Today, using that capacity selectively has become its greatest mission after defense against the unknown (e.g., building immune systems and futuring). As both traditional and terrorist foes continue to disappear worldwide, scenario planning with regard to predictable trendlines and unpredictable threats, and development partnerships with outside organizations (USAID, UN, G25) may be the most important new responsibilities of the world security apparatus. Both futuring (scenarios, trend analysis, intelligence) and economic and security development are a great way of leveraging our primary mission, which is global defense. Even as it downsizes, the Joint Forces Army, led by the U.S., remains the largest single defensive organization on the planet. In a world of ever shrinking militaries, no one wants this to change, either.
Because of our scale, the Army and the DoD are uniquely suited for development projects that no one else can tackle. Many of our largest projects have involved a redefinition of the concept of defense, just like the Army hydro projects back in the 1940s. Whenever we can do this globally in a cost effective manner we have been overwhelmingly praised by the world's citizens.
Recall back in 2019, when it was realized that only the U.S. Army had the capacity to develop fire line explosives (FLEs) big enough to prevent all the large forest fires in the world from threatening property. Precision delivery of munitions to the fireline allowed the creation of firebreaks of any size, under any circumstances. When the Army created a domestic technology transfer program, and later formed an on-call partnership with fire departments worldwide, that slight expansion of our traditional concept of national defense has saved the planet billions of dollars in property damage every year since.
Think also about weather control. After the successful tests of 2028 we are now finally on track to stopping our worst annual hurricane, monsoon, and blizzard damage worldwide, because of DoD-funded space-based microwave satellite technology. We had long predicted that gentle, localized heating from space would greatly decrease the severity of our worst storms and allow us to gently steer them away from landfall, but it took the resources of the world's leading defensive organization to create a global system which, controlled by the Joint Forces and in partnership with NASA and NOAA, now promises to save tens of billions of dollars in property destruction annually.
Every person who has a CI-equipped cellphone today, which is just about everyone, knows that development for emerging nations is no longer a question of "if," but "when." People in emerging nations ask, "When can I get the latest cool thing I see on the holonet?" Today, people everywhere are conscious of the limitations of their lifespan. They want their stadiums and parks and latest entertainment tech now, not ten years from now, which they know would translate to a significant reduction in quality of their lives.
We've found that when people know how they can actively help us to make their neighborhoods safer and cleaner, and when they get immediate tangible rewards for doing so (what ops research folks call "measurable exponential value"), it’s amazing the response we get. The filthiest, most crime-ridden communities can clean themselves up in weeks or months, as long as there is a digital network in place to auto-manage and positively reinforce their contribution.
Today, folks in the biggest "problem cities" worldwide get to vote on what kind of new chain store, service, or entertainment they want in their neighborhood. It is encouraging when they see the number of planned developments for their community visibly expand as soon as any corruption or crime index goes down, or a community group does something to help.
Population decline continues to be the greatest demographic problem facing the developed world. Even as child raising has become a much more socially valued activity, today's parents still aren't procreating enough to reverse the notorious "First-World Effect," the fifty years of population decline we have seen in industrialized nations across the planet. Today many nations pay couples a small salary to induce them to raise children, and of course parents have more help from their DT's with childraising than ever before.
The new diagnostic systems emerging within the valuecosm also help kids choose specialties that have the greatest potential for maximizing their talents and solving social problems. Top-performing kids come from everywhere, but it goes without saying that for decades the emerging nations have provided most of the best recruits, not only to the U.S. Army, but to the valuecosm as a whole.
Youth in emerging nations still don't have many of the tools and toys that the developed world has, so they aren't distracted by affluence, and they are willing to work very hard to bring themselves up to our level of lifestyle. Of course, our productivity and lifestyle are advancing at the same time, but at a smaller rate than theirs, so the relative gap narrows with each passing year.
In sum, it's worth observing that networks, like immune systems, always win, at least until they are replaced with even better networks. Recall how the British lost virtually all their colonies in a pre-digital age. By contrast, since 2008 the U.S. hasn't lost a single one of its strategic relationships for longer than a year, because the relationships are so much more positive-sum within our emerging high quality global network, the valuecosm.
Some folks say networks have had to become more powerful because individuals have more power to affect the whole than ever before. Game theory predicts a point where highly empowered individuals simply can't be allowed to operate in the larger world without being plugged in to and autoidentified by an equivalently powerful network. Army futurists see an even more transparent public network ahead (classified nets will of course continue to exist on top of the public network, but even they become more accountable), and cleverly developing it is a top priority.
E. The New Soldiering and Strategy Environment: Metawars and Wargames
Due to the increasing influence of the valuecosm, and the lack of any enemy which can oppose us in major conflict, the majority of warfare logistics today is focused on post-engagement development. Virtually no warfighting takes place between governmental organizations these days. The battles are breathtakingly short, and the postwar development programs are where almost all the strategic thinking and political effort goes, usually even before the war begins. Most warfare is telerobotic, and simulation-optimized, but this really isn't my specialty, so I'll say little about it. I refer you to others who will tell you that the modern theater of war has very few humans in harm's way during initial engagements, and a large number of warfighting superspecialists operating in symbiont networks on the other end of all those lethal and non-lethal machines.
The rules of engagement have become so development-friendly that we no longer deploy equipment, for example, including mines and ordnance, that we can't autoidentify for rapid recycling later. G25 Joint Military Group agreements now even allow us to charge formidable cleanup costs to any nation or combatant NGO who doesn't adhere to this standard.
Worldwide, policing bots handle at least 10 times more conflicts than warbots (with the exception of wargames, of course). That ratio is expected to climb steeply from now on, as all the indicators show that a safer, more inner-directed populace emerges the more transparent the world becomes, and the more options we all have for personal development. A few scholars saw this happening globally even in the 20th century (Ronald Inglehart, The Silent Revolution), but today it's obvious to just about everyone.
As most people today know, occasional physical and mostly virtual simulations are the future of learning (Clark Aldrich, Simulations and the Future of Learning). The U.S. Army is the largest educational system in the world, so it has really taken to simulation space for training and evaluation.
Today, in addition to postwar development, wargames greatly exceed warfighting in strategic importance in the military chain of command. This is unsettling to many old-timers, but seems part of an inevitable trend. Today's wargames deeply model and influence the way we conduct war, and occur continually across the globe, while warfare is scarcer and briefer every year.
One of the most strategic things the DoD has done in recent decades, at the direction of the U.S. government, has been franchising their entire defensive and wargaming system to many of our political allies. This line of thinking began with strategic base treaties negotiated in the 20th century, but when we opened up our training and wargaming infrastructure to all allied countries, and then created a multinational first response force from the recruits, the world's youth took to this more broadly and deeply than anyone thought possible.
Just like the football dads and soccer moms who enroll their kids in kinesthetic day care at the age of 3 and Pee-Wee football leagues at age 5, there are now a whole range of training simulation programs, under DoD license, that nurture a growing contingent of warfighters, development specialists, and crime fighters early on, and help them improve their performance in both physical and virtual environments.
America's Army, the early twenty-first-century recruitment video game, is now a globally franchised persistent world, and the best performers are allowed entry to U.S. forces either virtually or physically. After a tour of good performance combined with local development work, trainees can apply for fast-track naturalization to become a U.S. citizen under a new security-enhancing visa class. They may also receive small stipends, grants, and foreign aid, as well as other global travel and semi-citizenship benefits from participating JMG countries.
In recent years, just as the Olympic games have long been known as a place where nations peacefully lay down their weapons to compete, the Global Security Games (GSGs) have become known as a place where they can peacefully pick them up. Participating nations seek to cooperate together to make the world a safer place. To see great examples of practical measurable human performance enhancement, just look at the records being broken every two years by teams and individuals at the GSGs.
When the U.S. Army teamed up with G25 countries to create the first public GSGs that included international embedded media coverage in 2012, a new media franchise emerged, one that now rivals organized sports in some countries. Today, the most common distinction in the average military tour of duty results not from real-world conflict, but from honors given for besting previous GSG performance records.
The transition hasn't always been easy. There came a point in the early 2010s when military brass realized that several of these environments were no longer simply entertainment, but could actually be used to train a new generation of real-world terrorists. We reached an important turning point when a few lone-wolf terror attacks replicated what the players had learned in virtual space, in much the same way that real crime periodically emulates motion picture scripts.
By 2005, states were beginning to regulate the sale of graphic and sexual games to minors, and by 2012 the U.S. government was further regulating the level of detail in terrorist simulations worldwide. Today, high levels of authentication and transparency are required of all authorized players in high-realism environments. The highest-level combat sims are appropriately reserved for those with proper clearances. This has been a critical adjustment, necessary to allow the metaverse (simulation space) to continue its accelerating development.
Logistics and soldiering have been greatly improved by the quality of worldwide contestants in the GSGs, both real and simulated. Setting up the rules of each exercise is always politically tedious but worth the effort, as the surprises that occur are designed to simulate anticipated terror events to the greatest degree possible.
We've already mentioned developmental physics as the biggest paradigm change that has happened on the analytical and decision theory side of military logistics. Though they are far from perfect, today's OR/MS simulations, informed by developmental physics, are increasingly able to show us how to steer technological development toward measurable increases in STEM compression, intelligence, interdependence, and immunity. But there are a number of other innovations worth noting as well. Here are two that have been particularly useful in recent years.
As mentioned earlier, the valuecosm is a rich network of recorded preferences that have been personality captured by generic and DT avatars from all participating citizens around the world. It is simply all the data and systems we use to chart our preference landscapes for all the goals and choices we publicly and privately share about ourselves. It profoundly influences the way we make logistics choices today.
The following example should be illustrative. Say you are responsible for a fleet of telerobotic vehicles. You propose a change in one of your maintenance systems, involving buying less of widget X, and more of service Y, a labor-intensive but high quality process. Who exactly does this affect, and how much do they care about it?
The datacosm has a history, and that helps you understand your supplier options, find others who have made similar decisions, and begin to explore the public consequences. But it is the emerging valuecosm that can give you a rough sense of who all the stakeholders are likely to be, who might react negatively to the proposed change, and who would advocate it.
You discover, predictably, that X's main supplier will be unhappy. How unhappy? The valuecosm can crudely estimate that, help you understand how much their business will be hurt, and what concessions they might be willing to offer in return for maintaining the business. Negotiations become orders of magnitude more efficient, as first-approximation prices are attached to all the things you care about. Deals are proposed to you constantly, but only of the type you might find actionable, based on your preferences.
With the valuecosm, you can continually readjust your proposed future contracts, public or private, to align with those companies whose public performance values (low price, Mil-spec quality ratings, rapid service, payment flexibility, location, reputation) are most in line with your own declared values and abilities.
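To make the matching concrete, here is a minimal sketch of the kind of value-alignment scoring a valuecosm service might perform when ranking suppliers against your declared values. Everything here is hypothetical: the value names, weights, suppliers, and ratings are invented for illustration.

```python
# Hypothetical valuecosm-style supplier matching. All names, weights,
# and ratings below are invented for illustration.

def alignment_score(my_weights, supplier_ratings):
    """Weighted sum of declared value weights (summing to 1.0) against a
    supplier's public performance ratings (each on a 0.0-1.0 scale)."""
    return sum(weight * supplier_ratings.get(value, 0.0)
               for value, weight in my_weights.items())

# My declared priorities for the telerobotic-fleet maintenance contract.
my_weights = {"price": 0.2, "milspec_quality": 0.4,
              "rapid_service": 0.3, "payment_flexibility": 0.1}

# Publicly declared performance ratings for two notional suppliers.
suppliers = {
    "WidgetCo": {"price": 0.9, "milspec_quality": 0.6, "rapid_service": 0.5},
    "ServiceY": {"price": 0.5, "milspec_quality": 0.9, "rapid_service": 0.8,
                 "payment_flexibility": 0.7},
}

# Rank suppliers by how well their declared performance matches my values.
ranked = sorted(suppliers,
                key=lambda s: alignment_score(my_weights, suppliers[s]),
                reverse=True)
print(ranked)
```

Ranking by a weighted alignment score like this is only a first approximation; a real valuecosm would also weigh reputation histories, negotiated concessions, and authentication levels.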
If any of your desired values or product and service preferences seem out of line with the market, you can change them, or leave them in place and try to create a new market, publicly proclaiming your desire for a level of performance that doesn't yet exist.
You may decide for political purposes not to publicly identify the planned change at all, or identify it only to a special group. The valuecosm has many levels of authentication and privacy for sensitive communications. But where possible, the valuecosm pushes people to have open communications, as the collective value that can be created is so much greater.
We are all free to use whatever form of communication we want, but everyone increasingly knows it is a well-trained DT who proposes the most useful solutions to our problems. Each of us has to regularly teach our avatar whether he or she is misvaluing any of our choices (based on our feedback to its constant suggestions), but many of us do this because it makes us far more effective to have a good DT than a sloppy one.
In conflicts, the valuecosm is learning how to indicate all the allies aligned with your values, as well as all the potential "adversaries" who are opposed and who might try to stop or delay your move, if legally possible. Even today's generic and DT avatars handle many initial negotiations, presenting both parties with a range of possible solutions, and the current valuations each side expresses for each solution.
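As a toy illustration of how paired avatars might surface "a range of possible solutions" along with each side's valuations, here is a hypothetical sketch that keeps only the Pareto-efficient deals, i.e. those where neither party could do better without the other doing worse. The deal names and valuation numbers are invented.

```python
# Illustrative sketch of avatar-mediated negotiation: candidate deals are
# scored by each party's (hypothetical) valuation on a 0.0-1.0 scale, and
# only Pareto-efficient options are surfaced to the humans.

def pareto_front(deals):
    """Return deals not weakly dominated by another deal on both
    parties' valuations."""
    front = []
    for name, (buyer, supplier) in deals.items():
        dominated = any(b2 >= buyer and s2 >= supplier
                        and (b2, s2) != (buyer, supplier)
                        for other, (b2, s2) in deals.items()
                        if other != name)
        if not dominated:
            front.append(name)
    return front

# (buyer_value, supplier_value) for each proposed contract variant.
deals = {
    "more_widget_X":  (0.4, 0.8),
    "more_service_Y": (0.8, 0.5),
    "split_contract": (0.6, 0.7),
    "status_quo":     (0.3, 0.6),  # dominated by split_contract
}

print(pareto_front(deals))
```

The avatars would present only the surviving options, each annotated with both sides' current valuations, leaving the final trade-off to the humans.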
What this system is learning to do, the better it becomes, is to highlight all the social actors who are the most ideal collaborators, worldwide, and to rapidly improve the potential for global collaboration as soon as any transaction is publicly disclosed. The data involved is immense, and the communications worldwide, but avatars filter most of it, so the complexity seems less, on average, than what our parents had to endure. At the same time, career work has become a great deal more sophisticated "under the hood" of our surface-level social interaction.
Most people today feel that there is little danger in this kind of machine intelligence replacing human beings, as these are technologies designed to amplify and more accurately model human preferences, not supplant them. Such systems can only get better through the continual training by humans over time. After our individual creativity, engaging collectively within the valuecosm might even be the most important role that each of us plays in the modern world.
Do you remember the "balanced scorecard", that deceptively simple but useful innovation in big business management that emerged back at the turn of the century? (see Robert S. Kaplan and David P. Norton, The Balanced Scorecard).
It was one of several useful strategies for converting large, hierarchical, sclerotic organizations into networks of bottom-up management, periodically measured by a simple, clear set of standards, or scorecard. The key has been developing the right framework, in particular an evolutionary developmental framework, to determine the balance of factors for leadership to measure, incentivize, and inspire stakeholders to manage toward. As scorecard advocate Ben Plumb put it: in the industrial age, when success rested on the productive employment of tangible assets, direction was top-down. In the information age, when tangible assets are increasingly commodities and success rests more on the employment of intangible assets (employee knowledge, IT systems, customer relationships, etc.), the entire organization has to be involved in implementing strategy. The network is the company, and daily work in all the smallest nodes drives it. In the most nimble organizations, the top sets the standards and then stays largely out of the way, other than pruning and rewarding those who create the results.
While it seems mundane, this strategy of empowering individual employees and lower management to use balanced scorecards and other digital dashboards to act like leaders themselves is one of only a handful that have reliably instituted horizontal change in large organizations, which as we know are generally quite immune to major change. It has taken three decades for network-centric models like the balanced scorecard and its many digital descendants to filter into the military, which is and must always remain the last bastion of hierarchical control, but even here today's recruits self-determine a large number of the problems they want to solve, and are measured and rewarded based on their progress toward many of their self-determined goals.
I believe the military as an organization is a lot more productive for it. Hierarchy is invoked whenever there are crises, but crises are more episodic and localized today, so there is a lot more bottom-up control on a day-to-day basis, as amazing as it sounds. In distributed computing, as in human society, the smarter the individual "computer" becomes, the more that individual needs to self-determine what kind of performance he or she wants to enhance, to maximize the power of the whole.
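The scorecard idea above can be sketched as a simple digital-dashboard data structure. The four perspectives are the classic ones from the Kaplan-Norton literature; the metrics, targets, and numbers are invented, and I assume a higher actual is better for every metric.

```python
# Minimal, hypothetical balanced-scorecard dashboard. The four perspective
# names follow Kaplan & Norton; all metrics and numbers are invented.
from dataclasses import dataclass, field

@dataclass
class Perspective:
    name: str
    # metric name -> (target, actual); higher actual assumed better.
    metrics: dict = field(default_factory=dict)

    def score(self):
        """Fraction of this perspective's targets met or exceeded."""
        if not self.metrics:
            return 0.0
        met = sum(1 for target, actual in self.metrics.values()
                  if actual >= target)
        return met / len(self.metrics)

scorecard = [
    Perspective("Financial", {"budget_adherence": (0.95, 0.97)}),
    Perspective("Customer", {"unit_satisfaction": (0.90, 0.88)}),
    Perspective("Internal Process", {"on_time_delivery": (0.92, 0.95),
                                     "inventory_accuracy": (0.98, 0.99)}),
    Perspective("Learning & Growth", {"training_hours_met": (1.00, 1.10)}),
]

# Overall score: unweighted average across the four perspectives.
overall = sum(p.score() for p in scorecard) / len(scorecard)
print(round(overall, 2))
```

The point of such a structure is that any node in the network can compute and publish its own scores, so leadership only needs to set the standards, not direct the daily work.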
The secret we all know is that logistics never was primarily about optimizing resources, as much as we'd like to think it was. In an entropic universe, there must always be a tremendous amount of waste in every system, and logistics simulations minimize waste less than you might expect, as they always offer a new and more complex set of options at the same time.
What logistics optimizes most is perceived value to the stakeholders, and today, there are a lot more publicly identified and value-mapped stakeholders, as well as a lot more folks doing the perceiving, so our perceived capacity, quality, responsiveness, safety, economy, and other values are achieved much more effectively than ever before, as seen from the widest number of viewpoints within the system.
Through collective self-assessment, this new environment makes the logistician's job more productive than ever before. It's a great time to be a soldier.
Thanks to Ben Plumb and Jose Cordeiro for helpful feedback.
Clark Aldrich, Simulations and the Future of Learning, 2004.
AmericasArmy.com. Home of the highly successful U.S. Army simulation game.
Jan Amkreutz, Digital Spirit, 2003. A good introduction to the idea of the digital twin.
Thomas Barnett, The Pentagon's New Map, 2004. Succinct explanation of the post-cold war military paradigm: shrinking the disconnected gap.
Peter Bentley, On Growth, Form, and Computers, 2003. A top book on digital biology (biologically-inspired computing).
Robert Coram, Boyd, 2002.
Mihaly Csikszentmihalyi, Flow, 1991.
Hernando de Soto, The Mystery of Capital, 2003.
Ronald Inglehart, The Silent Revolution, 1977. See also his more recent books. Extensive surveys showing how technological development creates a less ideological, more social and personal development oriented populace.
Robert S. Kaplan and David P. Norton, The Balanced Scorecard: Translating Strategy into Action, 1996. The classic intro text. Also see www.bscol.com, run by Balanced Scorecard Collaborative, the vehicle Kaplan & Norton formed to spread this technology. See also The Strategy-Focused Organization: How Balanced Scorecard Companies Thrive in the New Business Environment, 2001. A broad expansion of the original theory, along with valuable case histories.
Robert Klitgaard, Controlling Corruption, 1991. Insightful text by the dean of the RAND Graduate School.
Don Lee, "China Fears a Baby Bust," Los Angeles Times, December 6, 2004.
Christopher Manning, Foundations of Statistical Natural Language Processing, 1999.
Paul Niven, Balanced Scorecard Step-by-Step: Maximizing Performance and Maintaining Results, 2002. How-to guide by a consultant who does this for a living. Read this only if you're going to try this on your own organization.
Mihail Roco and William Sims Bainbridge, Eds., Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science, National Science Foundation, June 2002.
Everett Rogers, Diffusion of Innovations, 2003. Classic text on sociocultural, institutional, and environmental context of innovation.
John Smart, "The Conversational Interface and the Symbiotic Age", 2003 and "Promontory Point Revisited," 2003. Two speculative articles on the CI.
Bruce Sterling, Global Business Network Interview with Bruce Sterling: Questions for IBM for the US Army Logistics Transformation Agency (LTA) on the Future of Technology, December 1, 2004.
Leo Tolstoy, Anna Karenina, 2004. Classic fiction on lives lived for self and others, and the consequences of life choices.
Robert Trivers, Natural Selection and Social Theory, 2002. One of the fathers of sociobiology, and explorer of the concepts of reciprocal and general altruism in social game theory.