The Challenge Questions below, color-coded for Science,
Technology, Business, and Humanism
(personal, social, political) themes in accelerating change, will
be placed one per table on the 20 to 30 tables for our Saturday
evening Collective Intelligence dinner. They'll be identified
by color-coded theme and by number, for those interested in pre-identifying
questions they would particularly enjoy discussing. Seating is
open. After dinner, during dessert, we will hear reports from self-selected
group leaders from various tables on the results of discussion
concerning a range of difficult, interesting questions. We'll
cover about eight questions (two per theme) in the main room after
dinner, take a brief break, and then those wishing to continue
with questions will move to a breakout room, while others will
watch an evening video presentation and teleconference with Howard.
If you expect to discuss or be a group leader in discussing a particular
question set below, feel free to do a little advance preparation
(always appreciated, never expected). Be aware that none of these
questions are easily answered, and we hope you'll enjoy discussing
them with other insightful, inquisitive individuals in a give-and-take
of idea exchange.
Note: These question sets are rough guidelines
for the discussion. You are not expected to have data or analyses
at hand, only to share your intuitions. The most interesting part
of the discussion may not be the answers you come up with, but
the ideas and opinions you share
with your table, and your group leader shares with the audience.
Attribution of opinions to particular individuals is encouraged,
but entirely optional.
If your table finds your question unclear or unappealing, feel
free to pick any from the list below, or to invent
your own. You might begin with the question you are given, but
feel free to go wherever conversation and group interest takes
you. You may end up exploring and reporting on a small portion
of the question, a different question, or a question of your own
invention. Contact us if you have edits or additional questions to propose.
What is the difference between development and evolution? Which
parts of the universe, and of our local environment, might be
following a developmental plan and which are chaotically or randomly
evolving? What parts of a developing biological organism undergo
random variation during the unfolding of its developmental plan?
What tests might allow us to tell the difference?
In what sense is natural selection an incomplete description of
complex systems and of accelerating systems? What are "self-selection,"
"self-organization," "convergence," and "emergence,"
and in what way might such processes constrain our current and future development?
Is there any scientific evidence for accelerating computation
in the history of life on Earth? If so, how do we measure it?
Could it be an illusion, an "observer-selection bias"?
How do we quantify computation in technological systems? In nontechnological
systems? What is the significance of Sagan's "Cosmic Calendar"
of accelerating emergences in universal history?
What is the difference between such terms as data, knowledge,
information, meaning, and wisdom? Can we expect any near-term
improvements to our current theories of information and computation?
What new theorems might an "Einstein of Information Theory"
help us to understand about the world?
Is accelerating computation happening on Earth in special systems
over time? If so, is this likely to be a ubiquitous feature of
universal development, or might Earth be unique? If Earth isn't
physically unique, does it seem reasonable that the universe is
tuned in its initial parameters for many endpoints of high-level
universal simulation? Is that somehow valuable to the universe?
Does the historical developmental path of Earth's intelligence
suggest we are heading for "inner space" or "outer
space"? Will we likely go out to colonize the stars, or in
to ever smaller microdomains? Perhaps both? Is the future of human
intelligence constrained by either the large-scale structure of
spacetime in cosmology, or the small-scale structure of physical law?
Is accelerating change a fractal (scale-free) process, applicable
at all known universal scales? Do accelerating systems always
become more localized, miniaturized, and resource efficient over
time, or is this just a feature of recent history in digital computers?
What physical systems, if any, have seen no accelerating change
during their development?
What are possible candidates for emergent properties, constraints,
laws, or meta-laws of accelerating change? Are measures of technological
autonomy, immunity, and interdependence also accelerating? If
so, how long can we expect acceleration to continue? Do any properties
operate with increasing power the more computationally complex
the system in question?
What are mathematical, cosmological, technological, and computational
singularities? Which of these can be studied scientifically? How
do the multiplicity of singularity models interrelate? Can we
use systems theory or complexity studies to interrelate them?
What other exponential and asymptotic domains exist in physical systems?
What technologies are most helpful in accelerating positive personal,
economic, social, and political change? What technologies do we,
and should we, regulate and slow down, due to their negative or potentially
negative effects on the human environment? How do we best assess
whether a technology is worth promoting or regulating?
Is modern humanity better characterized as a selective catalyst
of technological development, or as its controller? What are the implications
of either? What kind of programs and institutions do we need to
improve technology innovation, diffusion, assessment, and policy?
Within 20 years we can expect our most complex computers and robotic
systems to start exhibiting both intelligence and personalities.
How will we know which personalities to trust, and which might
become "sociopathic," like HAL in 2001? As today's still-stupid
computers and robots steadily increase in sophistication, what
are our control options for "safe learning agents"?
What will be the future roles for "bottom up" (evolutionary
or developmental) vs. "top down" (rationally designed,
human-built) creative strategies in artificial intelligence and
nanotechnology in coming decades? Which path to electronic brainmaking
is more useful: deliberate design, "reverse engineering,"
or evolutionary computation? Perhaps a different paradigm?
Human history has had particularly violent and selfish phases
in our evolutionary development. Must computers in a bottom-up
design paradigm follow a similar course in their developmental
psychology? If so, what risks may this pose? If not, why not?
What are the safety issues for bottom up vs. top down A.I. design?
Is one more dangerous than the other?
The rate of technological change is causing major stress in first
world countries, and it's only going to get faster. Some civilizations
(Egyptians, Mayans, Romans, etc.) had catastrophic collapses under
special stresses. Could that happen today? To all first world
societies or just a few? Would collapse of one society stimulate
immune responses in the others?
If we hit a Moore's law (price-performance doubling every 18-24
months) limit circa 2015 due to miniaturization limits in ICs,
or a cost explosion somewhat before or after this same date due
to Moore's second law (doubling in cost of chip fabrication plants
every four years), what will likely happen next in the technological
world? Will new paradigms get us around these limits?
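As a back-of-the-envelope sketch of the two trends named above (assuming an 18-month price-performance doubling and a four-year fab-cost doubling; the exact rates and the 2015 date are illustrative, not forecasts):

```python
# Rough arithmetic behind Moore's two laws (illustrative assumptions only):
# price-performance doubles every 18 months; fab cost doubles every 4 years.

def doublings(years, period_years):
    """Number of doublings over a span, given one doubling per period."""
    return years / period_years

span = 2015 - 2003  # years from the time of writing to the projected limit
perf_factor = 2 ** doublings(span, 1.5)   # 18 months = 1.5 years
cost_factor = 2 ** doublings(span, 4.0)   # Moore's second law

print(f"Price-performance gain by 2015: ~{perf_factor:.0f}x")  # ~256x
print(f"Fab-cost growth by 2015: ~{cost_factor:.0f}x")         # ~8x
```

Note the asymmetry this implies: a 256-fold performance gain bought at an 8-fold increase in fabrication cost, which is why the second law eventually bites even if miniaturization itself does not stall.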
If chips eventually become unshrinkable commodities in coming
years, will we finally see the emergence of massively parallel,
multi-chip associational architectures (impractical to build today,
due to ongoing vertical miniaturization)? Would a new era of "horizontal
miniaturization" keep us on Moore's general performance curve?
Would it bring more biologically-inspired chip design?
What should be our near-term intelligence amplification (I.A.)
and artificial/autonomous intelligence (A.I.) political, social,
and personal priorities? What roles will technology innovation,
diffusion, assessment, and policy (IDAP) play in the near-term?
What current investment opportunities, sectors, and strategies
are most likely to keep us on a curve of accelerating productivity
in business and economic indicators for the next five to ten years?
How can we buffer the transfer of technological acceleration to
market capitalization in our increasingly innovative markets?
How do we sort hype in a world of information overload?
What are the critical factors for increasing the growth of innovation,
technology diffusion, and business intelligence in our current computation-rich,
simulation-rich, bandwidth-poor environment? How can we solve
the "last mile problem" in bandwidth access, and how critical
is this vs. other technical issues to improving national productivity
in the coming decade?
Are we now at the beginning of an IT Globalization Revolution
(where first world countries will outsource much or most of their
IT work to the third world), similar to the Manufacturing Globalization
Revolution of the 1980s (where first world countries outsourced
much or most of their manufacturing to the third world)? If so,
how should we prepare for this new shift?
How do we account for the growing value of intangibles in an increasingly
information- or knowledge-based Exponential Economy? Are labor
productivity and intellectual property value both on double exponential
growth curves? Which is growing more quickly? What specific political,
economic, cultural, and technological advances are driving productivity growth?
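For a concrete picture of what "double exponential" means in the question above, here is a minimal sketch contrasting simple and double exponential growth (the parameter values are invented for illustration, not fitted to any economic data):

```python
import math

# Simple exponential: constant growth rate, so a constant doubling time.
# Double exponential: the growth rate itself grows, so doubling time shrinks.

def exp_growth(t, r=0.10):
    """Value after t years at a fixed 10%/yr continuous growth rate."""
    return math.exp(r * t)

def double_exp_growth(t, a=1.0, b=0.05):
    """Value whose exponent itself compounds at 5%/yr (normalized to 1 at t=0)."""
    return math.exp(a * (math.exp(b * t) - 1.0))

for t in (0, 20, 40):
    print(t, round(exp_growth(t), 1), round(double_exp_growth(t), 1))
```

With these made-up parameters the double-exponential curve starts slower but overtakes the simple exponential within a few decades, which is the distinction the question is probing: not just growth, but accelerating growth.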
Is technological acceleration increasing turbulence in the economic
sector? Is it taking away the momentum of scale, and forcing companies
to focus on strategies of resilience, innovation, and continual
renewal? What can or should be done to protect the U.S. and other
economies from increasing turbulence, recessions, and increasing
investment losses in coming years?
How do we maximize the spread of individual and corporate wealth
in a sustainable and culturally appropriate manner in our global
economy? How do we promote the "triple bottom line"
(society, economy, environment) of industrial ecology? How rapidly
can we bring competitive markets, democracies, and liberal traditions
to cultures with no history in these areas?
To what extent do we need to promote third world development,
and to what extent first world development, in coming decades?
Which current or horizon technologies are now and might be particularly
effective in third world development? How do we mitigate increasing
nationalism and tribalism in the face of spreading democracy,
liberty, and competitive markets?
How do we increase technology assessment and create better regulated,
but still business-friendly technology policy? What regulatory
or deregulatory processes, if any, will help us regain productivity?
Was the bubble and crash of the new economy a "healthy"
process? Will we see more severe versions of this in coming years?
How do we best prevent or adjust for the accelerating dislocation
of workers in an environment of accelerating technological change?
If we minimize job guarantees, allowing creative destruction in
the business sector, what kinds of retraining programs and other
resources should our social net provide? How do we improve management-labor relations?
What are the critical factors facing humanity in the next generation
(25 years)? In the next century? Are science and technology following
a different, continually accelerating curve by comparison to change
in the environment and in human culture? If not, why not? If so,
what does a steeper slope of the sci-tech curve mean for our near
and long term future?
What do you expect to be the greatest positive development in
the next 25 and 100 years? What do you fear may or will be the
greatest negative development in these same time frames? What
political, social or personal factors might deeply affect the
shape of the long-term future?
What will be the roles of global governance, ethics, and planetary
consciousness and sustainability in coming decades? How distributed
or centralized will our solutions be? How can we be successful
in conflict resolution, closing the rich-poor divide, and accelerating
compassion in coming years? What problems should be our immediate
vs. intermediate-term priorities?
What should be our ethics, issues, and priorities as we contemplate
a future of inexorable accelerating technological change and a
present that continues to manifest great social divisions, war
and global arms trade, famine, disease, illiteracy, and injustice?
How do we best mitigate our remaining global divides, which are
closing the fastest, and why?
The potential for technological destruction is steadily rising,
from guns, to car bombs, to "dirty nukes," to bioterrorism
that might kill tens of thousands, possibly even millions of us.
Is there a long-term solution to this problem? Does it require
full technological transparency (e.g., David Brin's The Transparent
Society, 1998) as a solution? If so, what cost to human
liberty will be involved?
How can we best use social and technological tools to amplify
our biological abilities, so-called "intelligence amplification"
(I.A.), versus to create artificial intelligence (A.I.) in the
next few decades? What kinds of educational, communication, and
other "I.A." approaches will allow us to increase our
individual creativity, social stability, and our species' "swarm intelligence"?
To what extent is our emerging internet and communications grid,
by increasing the quality and quantity of information flow between
individual humans, creating a "planetary consciousness"
or "global brain"? To what extent is the Lovelock-Margulis
"Gaia" analogy applicable? How will our emerging "global-self"
interrelate with our fluid and diversifying individual consciousnesses?
What features of biological systems are still missing from technological
systems? What new physical capacities in technological systems
are missing from biological systems? Are these two realms converging
or remaining separate? Will technological intelligence eventually
become a superset of biological experience? What might we expect
from an "electronic consciousness?"
Might biological "uploading" into a technological substrate
be possible or desirable in the future? Would self-aware computers,
symbiotically connected to us, want to upload us? Humans and machines
are currently becoming increasingly interconnected. Must this
trend accelerate if machines develop autonomy? Are machines becoming more like us?
Some philosophers propose that humans are creatures with finite
computational abilities attempting to understand the infinite,
by scientific, ethical, aesthetic, spiritual and other pathways,
and that this finite/infinite contrast will endure indefinitely.
If conscious computers are also finite state machines, and just
as compelled to seek universal understanding, does this mean they
will be just as scientific, ethical, artistic, and spiritual as
us? Possibly more so?