Future of Tech
Authors: AMR AHMED SHAWKY, SHARAF JOUDY MOHAMMED, IBRAHIM SOHAILA MOSTAFA

81 years ago ...

One day, I, Ahmed Shawky, and my friends Joudy, Youssef, Saif, Mariam and Sohaila decided to flee at six in the morning to a safe place in the countryside, to look for a quiet, safe life and for shelter. We came upon a small village where farmers were growing wheat, rice and sugar cane in the fields; they had a small cart for transporting the crops, and there were pastures for the cows. In a house built of bricks we saw women making cheese from cow's milk, molasses from sugar cane, and Egyptian pie. We decided to eat pie, honey and cheese, and the food tasted great. During the meal we met a little dog and decided he would be a friend to all of us. My friends and I began talking about the future of the planet and of technology, and then Joudy spoke.

Joudy: The worst thing is that disease came to occupy the Earth, like a monster that came to kill many people. On 14/2/2019, COVID-19 devastated the Earth. The Earth had already died; it was a disaster.

And in the middle of the conversation...

Saif: Oh my God!!

(All in one voice): What is wrong, Saif?

Saif: Look! Look! Can't you see? There is something like a ghost that appears and disappears...

(All in one voice): Are you okay, Saif?

Saif: There is something like a desert!!

(All in one voice): Yes, we see it!!!

 

Saif said, "What is it that I see? A temporal gate! Let's enter it; maybe we can see the future. Let's go!" They all entered the temporal gate, and Ahmed said, "Where are we, and what is this?" His friends answered, "We are here," and he asked, "Where is the dog?" Then he looked beside him and saw that the dog had turned into a robot dog. He saw a world full of technology: the buildings were technological, the product factories ran without any human having to intervene, huge buildings grew crops controlled by robots, and medicine was at its best, because robots helped humans in high-precision, dangerous surgeries and made them easier. All of this was thanks to the scientific effort that scientists had put into robots...

Saif (yelling at his friends): Do you notice something? Whenever we start talking about the future, or even say the word "the future", time changes at an extraordinary speed without us feeling it, and we move from one time jump to another, sometimes even back to the past.

 

{{ Time jump }}

 

2050

Joudy: Look at those schools, how beautiful they are and how much technology has developed in schools now. There are no teachers; there are robots instead, and they teach the students. AI-R-One, a robot, is a teacher at the school. He told us to come with him to have lunch at his house. The thing we discovered is that the robots do eat, but they eat paper that provides them with information.

Joudy: Do we travel to Mars now, as they used to say?

AI-R-One said: Yes, we travel easily. Anyone can travel.

 

Sohaila: I would love to discover life on Mars. I love the future.

{{ Time jump }}

2070

So I went by elevator from Earth to Mars. The elevator was great inside and took me there in 15 minutes. When I arrived, I found people living a completely different life from the people on Earth. For example, I saw people wearing something on their backs and flying with it from place to place. But there was something strange: when I went to Mars, I found myself a year younger than my age. So I read many books and learned that the Earth takes 365 days to orbit the sun, while Mars takes 687 days, a difference of almost one Earth year. Because of the time difference between the two planets, I wanted to go back to Earth, and I came back through the same elevator. In my opinion, you should go to Mars and enjoy the adventure there. On Mars, life is completely different, and some people prefer it.

Mariam said: What an awesome future!

{{ Time jump }}

2090

Mariam: Unfortunately, food will be reduced due to climate change. We must invent tablets so that our diet is not harmed: tablets that taste the same as foods that have become extinct, such as rice and wheat. Public health experts believe that health and beauty are based primarily on the rules of proper nutrition, since the work and harmony of the body's organs depend on the balance of the basic elements received through the food we eat. It is known that food consists mainly of the following basic food groups:

Proteins

Fats

Sugars

Mineral elements and vitamins

Water

Although most foods contain various nutrients, no single food contains all of them. For this reason, nutrition and public health experts advise eating different types of food rather than being limited to one type only. Thus, we could create tablets that stand in for the foods we eat every day.

Youssef said: This is the food of the future.

{{ Time jump }}

2100

Youssef found some traditional books and references from 2016 to 2019 that talk about the future of technology, along with some electronic links from an old channel called "YouTube", explaining the following:

Modern technology has enabled monitoring of large populations of live cells over extended time periods in experimental settings. Live imaging of cell populations growing in a culture vessel has been widely adopted in biomedical experiments. Such experiments create large numbers of time-lapse images, and the information embedded in these images holds tremendous promise for scientific discovery. The growing amount of such data calls for automated or semi-automated computer vision tools to detect and analyze the dynamic cellular processes, and to extract the information necessary for biomedical study.

Live cells are often of low contrast with little natural pigmentation; therefore, they usually need to be stained or fixed in order to be visible under bright field microscopy or fluorescence microscopy. However, fixing or staining may destroy the cells or introduce artifacts. Label-free imaging, in particular phase contrast microscopy, is highly desirable for live cell imaging because it allows cells to be examined in their natural state, and therefore enables live cell imaging over extended time periods. We focus in this chapter on analyzing time-lapse images from phase contrast microscopy.

While there is benefit in studying cellular behavior at the single cell level, we focus in this chapter on computer vision analysis of cellular dynamics in large cell populations. In particular, we are interested in data where the cell populations are dense, and there is considerable touching and occlusion among neighboring cells.

In studying cell populations in time-lapse imaging, manual analysis is not only tedious; the results from different people, and even from the same person, can also differ considerably. Moreover, it becomes impractical to manually analyze the behavior of each individual cell in a large population throughout an experiment over an extended time span. Much of the current practice in biomedical study is to manually examine a small subset of the entire cell population, and research conclusions are often drawn based on such manual analysis of a subset of the data. This is suboptimal; it could be misguided when the subset is not representative of the entire cell population. Therefore, automating the process of analyzing each individual cell and its behavior in large populations in time-lapse label-free images is not only interesting and challenging computer vision research, it also has the potential of transforming how studies concerning cell population dynamics are done, and what they may discover.
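As a concrete, heavily simplified illustration of what such automation involves, the sketch below counts cells in a single grayscale time-lapse frame by thresholding and connected-component labelling. It is a minimal sketch under assumptions of our own: the threshold rule, the minimum cell area, and the `count_cells`/`load_frame` names are invented for illustration, and real phase contrast images typically need specialised segmentation that this example omits.

```python
import numpy as np
from scipy import ndimage

def count_cells(frame: np.ndarray, min_area: int = 30) -> int:
    """Rough per-frame cell count via thresholding and connected components (illustrative only)."""
    img = frame.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-9)      # normalise to [0, 1]
    img = ndimage.gaussian_filter(img, sigma=2)          # suppress pixel noise
    mask = img > img.mean() + img.std()                  # crude global threshold
    labels, n = ndimage.label(mask)                      # connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))   # component areas in pixels
    return int(np.sum(np.asarray(sizes) >= min_area))    # drop tiny specks

# Hypothetical usage: load each grayscale frame with any image reader and count cells over time.
# counts = [count_cells(load_frame(i)) for i in range(num_frames)]
```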

INTENT

Modern technologies support new forms for an organization to present its services and products. In addition, they allow an organization to open up new sales and service channels to better support both customers and suppliers.

 

PROBLEM

We want to use modern technologies, including the Internet, to realize open systems that integrate various applications and legacy systems within a uniform development and architectural concept.

 

We need a design concept that encapsulates business logic into units independent of interaction mechanisms or front ends, making them available for different channels, technologies, and workplace types.

 

SOLUTION

We integrate different sales channels and workplace types. We use domain service providers to offer bundled services and allow users, customers, and suppliers to easily handle related products and services.

 

By our definition, a domain service provider is a domain-specific conceptual unit within a large distributed application system. A domain service provider represents business logic in a way that encapsulates the reproducible and interrelated interactions of an application context with the pertaining materials.

 

Domain service providers are addressed by application components or other domain service providers and respond by supplying their services. To provide a service, they use the materials they manage.

 

Domain service providers are implemented so that the specific way these services are rendered and which interaction model is used to present the results at a user interface remains open.

 

If domain service providers are not oriented to a specific presentation and handling or interface technology, then a number of different sales channels could package the services and present them differently, depending on the customer type and technology. Different service providers could each support various workplace types and other services.
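To make the idea more tangible, here is a minimal sketch of a domain service provider for a hypothetical account domain, written in Python like the other examples in this text. The class and method names (`AccountServiceProvider`, `open_account`, `deposit`) are invented for illustration; the point is only that the business logic works purely in domain terms and returns plain domain objects, leaving presentation to whichever sales channel or workplace type calls it.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Domain material managed by the service provider."""
    account_id: str
    owner: str
    balance: float

class AccountServiceProvider:
    """Bundles account-related business logic, independent of any front end."""

    def __init__(self) -> None:
        self._accounts: dict[str, Account] = {}

    def open_account(self, account_id: str, owner: str) -> Account:
        account = Account(account_id, owner, balance=0.0)
        self._accounts[account_id] = account
        return account

    def deposit(self, account_id: str, amount: float) -> Account:
        account = self._accounts[account_id]
        account.balance += amount
        return account

# A web shop, a call centre and a mobile field service could all call the same
# provider and differ only in how they present the returned Account objects.
provider = AccountServiceProvider()
acct = provider.open_account("A-1", "Alice")
provider.deposit("A-1", 100.0)
print(f"One possible rendering: {acct.owner}: {acct.balance:.2f}")
```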

 

BACKGROUND: DOMAIN SERVICE PROVIDERS

Modern technologies have had a major impact on how organizations handle their business, as they open up different sales channels to reach customers. Many companies offer their products online, and electronic commerce is regaining its pace, especially in the B2B (business to business) sector. A major change can be observed, particularly in the service industry, where companies expand their activities by adding services on demand, call centers, or mobile field services. Currently, many of these service forms are still found in isolation, somewhat blocking integrated services at customer sites, because the underlying applications are primarily linked on a data level rather than over conceptually higher business transactions. For example, many database applications are based on isolated data for accounts, deposits, contracts, and other data pertaining in some way to a customer, but a general customer concept with links to all related entities is missing.

 

The Internet allows people to compare products and services directly. Potential customers can obtain information about prices and services of different vendors quickly from their homes. The Internet also allows competing vendors, including vendors from different industries, to penetrate the core businesses of organizations. This means that these organizations feel a need to be present in this medium. The reason is the increasing trend that open markets and new technologies dissolve narrow industry boundaries.

 

For example, many insurance companies have started offering bank services, while banks have expanded into the insurance business, and do-it-yourself companies offer travel packages.

 

Though this change challenges many traditional organizations, it offers a chance to bundle different services in a few concentrated sales or service points.

 

The downside is that most currently deployed application systems are too limited to support these new business trends. In addition, many host-based applications have reached the point where they can no longer be maintained or upgraded. While many workplace applications have been replaced by other technologies or implementations, we can see the same happening in an attempt to support new sales channels.

 

Unfortunately, there are not many integrated applications to support these organizations in their efforts to grab these business opportunities. Domain functionality is developed in many and different ways. If you find some degree of domain-specific integration, it is mostly found on the level of the host database and data-exchange.

 

The multiple implementation of application logic is the central problem that new technologies must overcome. This problem is further aggravated by a technical one, because most new applications are implemented as client-server systems. These systems are mostly structured on the basis of the so-called three-layer architecture. This architecture suggests an integration of the functions or applications involved on the data level. In summary, there is a lack of domain-specific integration.

 

Monitoring and Supervisory Control

Modern technology allows machines to become more and more complex, and manual-control tasks can mostly be automated. Modern airplanes are popular examples of this. There is a certain irony in such automation which should not be overlooked. On the one hand automation serves to avoid human errors and to reduce mental load, but on the other hand there are emergency situations in which the system changes from automatic to manual control. The irony is that a human operator who is deprived of the daily experience of controlling the machine in normal situations is required to master it in the difficult emergency cases. In the normal situation, the task of the operator in highly advanced transport and production systems becomes that of a supervisor. The task of the supervisor is no longer continuous control of the system output, but rather the monitoring of the system state, the discovery and diagnosis of abnormal states, and the taking of appropriate actions when abnormal states are discovered.

 

Monitoring can be a boring task: irregular system states are rare events, so there is little that happens. The classical problem which gave rise to the systematic study of maintained attention to displays which only rarely indicate critical events is that of the radar controller. In the course of a watch there is typically a fairly rapid decline of the probability of detecting a critical event, the so-called vigilance decrement. The same phenomenon can become a problem in industrial inspection tasks. In particular, the really rare events carry a risk of being overlooked. One of the factors which are likely to contribute to the vigilance decrement is temporary inactivation of the appropriate task set.

Monitoring can also be a task which places too high a burden on a human operator. There can be hundreds of state variables which have to be monitored, so the decision as to which state variable to inspect next can be difficult. In their inspection patterns operators adjust to the characteristics of the state variables. For example, state variables which change rapidly tend to be inspected more frequently than state variables which change only slowly. In addition, interdependences between state variables guide the inspection pattern. However, in the case of a failure the normal characteristics of the state variables may no longer be present, so that inspection patterns can be misguided.

 

The detection of abnormal system states can take minutes or hours, depending on a number of conditions. Among them is the rate of change of the system states, which can be slow; the pattern of inspection behavior also plays a role.
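To illustrate just one of the observations above, that rapidly changing state variables tend to be inspected more often than slowly changing ones, the toy scheduler below allocates the next inspection to the variable with the largest recent rate of change. The variable names and the averaging rule are assumptions made for illustration, not a model taken from the human-factors work described here.

```python
from collections import deque

class InspectionScheduler:
    """Toy scheduler: inspect next the variable whose recent values change fastest."""

    def __init__(self, variables, history=5):
        self.history = {v: deque(maxlen=history) for v in variables}

    def record(self, variable, value):
        self.history[variable].append(value)

    def next_to_inspect(self):
        def mean_rate(values):
            vals = list(values)
            if len(vals) < 2:
                return float("inf")  # variables never observed get inspected first
            return sum(abs(b - a) for a, b in zip(vals, vals[1:])) / (len(vals) - 1)
        return max(self.history, key=lambda v: mean_rate(self.history[v]))

sched = InspectionScheduler(["pressure", "temperature", "flow"])
for p, temp, f in [(1.0, 20.0, 5.0), (1.1, 20.0, 5.4), (1.0, 20.1, 5.9)]:
    sched.record("pressure", p)
    sched.record("temperature", temp)
    sched.record("flow", f)
print(sched.next_to_inspect())  # "flow" changes fastest in this toy data
```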

Modern technology exploits and controls organic materials with a precision that was inconceivable only a few decades ago.[1] This is a consequence of the fact that the performance and properties of a material can be related to the structure and behaviour of its constituent molecules. For instance, considering the topic of the present book, the photostability of polymers, sunscreens, drugs, paints, etc. depends on the ability to dissipate, at the molecular level, the absorbed photon energy either via non-radiative (e.g. internal conversion) or radiative (e.g. phosphorescence) channels. In general, the more control we have over molecular properties, the better a material will fit our needs and, for this reason, chemists have in the past learned to synthesize, for instance, photostable and luminescent molecules.

 

A novel research target in the area of the control of molecular properties is the design of molecular machines: molecules that react to a certain external signal by displacing, usually reversibly, one or more of their parts. Modern chemistry and technology are rapidly moving along this path: molecular devices and machines are nowadays under investigation, giving rise to so-called molecular technology, also termed nanotechnology.[1] Among molecular devices, those based on reversible photochemical reactions are of great interest. These devices are operated by irradiating the molecule at the wavelength required to trigger the photochemical process. In principle, the design of the right reagent allows for direct control of the reaction rate, efficiency and photostability, even when the interconversion between the two (or more) "states" of the device needs to be repeated over a large number of cycles. It is apparent that the elucidation of the factors controlling photochemical reactions is imperative for the rational design of such a material and, in particular, that it is mandatory to elucidate the details of the photochemical reaction mechanism. Among others, this requirement provides a timely and solid motivation for the topic developed in the present chapter.

In recent years computational chemistry has gained increasing consideration as a valid tool for the detailed investigation of photochemical reaction mechanisms. Below we will outline the strategy and operational approach to the practical computational investigation of reaction mechanisms in organic photochemistry. The aim is to show how this task can be achieved through high-level ab initio quantum chemical computations and ad hoc optimization tools, using either real or model (i.e. simplified) systems. Another purpose of the present Chapter is to show that, nowadays, a computational chemist can adapt his/her “instruments” (the method, the approach and the level of accuracy) to the problem under investigation, as every other scientist does when there is a problem to study and a methodology to be chosen. In particular, different and often complementary computational tools may be used as “virtual spectrometers” to characterize the molecular reactivity of a given chromophore.

 

The general approach used to follow the course of a photochemical reaction involves the construction and characterization of the so-called "photochemical reaction path". This is a minimum energy path (MEP)[6] starting at the reactant structure and developing along the potential energy surfaces (PES) of the photochemically relevant states. Such an interstate path usually originates at the Franck-Condon (FC) point on the spectroscopic state and ends at the ground-state photoproduct valley. This approach has been named the photochemical reaction path (see also Chapter 1) or, more briefly, the pathway approach.[7, 8] Within this approach one pays attention to local properties of the potential energy surfaces such as slopes, minima, saddle points, barriers and crossings between states. The information accessible with this method is structural: i.e. the calculated path describes, strictly, the motion of a vibrationally cold molecule moving with infinitesimal momentum. While the path does not represent any "real" trajectory, it allows for a rationalization of different experimental data such as excited-state lifetimes, the nature of the photoproducts and, more qualitatively, the quantum yields and transient absorption and emission spectra. As we will see in Section 2, this approach can be related to the common way of describing photochemical processes in terms of the motion of the centre of a wave packet along the potential energy surfaces.[9] Notice also that the analysis of the photochemical reaction path is currently receiving new attention as a consequence of recent advances in femtosecond spectroscopy and ultrafast techniques.

In most past work, the structural features of the energy surfaces and, ultimately, the entire reaction path have been computed by determining the molecular wavefunction with state-of-the-art ab initio methods. In particular, a combined ab initio CASPT2[10, 11]//CASSCF[12–15] methodology has been extensively used, since it has been proven to reproduce data with nearly experimental accuracy.[8, 16] This approach will be described in detail in Section 3, together with a number of commonly used potential energy surface mapping tools. The operational procedure for approaching a photochemical problem will then be described and discussed in Section 4. The application of this procedure to the intriguing problems of determining the mechanism of the photoinduced cis-trans isomerization of a retinal protonated Schiff base (RPSB) model and of azobenzene (Ab) will be discussed in Sections 5 and 6, respectively. Both of these chromophores have an extended conjugated π-system and are characterized by ultrafast and efficient cis-trans isomerizations taking place upon photoexcitation. Thus, these systems can potentially be employed in nanotechnology for the design and construction of molecular devices such as random access memories, photon counters, picosecond photodetectors, neural-type logic gates, optical computing elements, light-switchable receptors and sensors, light-addressable memories and molecular motors, to mention just a few.[2, 17, 18] Finally, the complex network of reaction paths underlying the photochemical reactivity of cyclooctatetraene (taken as a representative of cyclic conjugated hydrocarbons) will be discussed in Section 7 to illustrate both general and subtle aspects of photochemical organic reaction mechanisms.
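For readers who want a feel for what tracing a minimum energy path involves, the sketch below follows a steepest-descent path on a simple analytic two-dimensional model surface. The model potential, step size and starting point are purely illustrative assumptions; the work described in this chapter maps ab initio CASPT2//CASSCF surfaces of real chromophores, not toy functions.

```python
import numpy as np

def potential(x, y):
    """Toy two-dimensional model potential with two valleys (illustrative only)."""
    return (x**2 - 1.0)**2 + 0.5 * (y - 0.3 * x)**2

def gradient(x, y, h=1e-5):
    """Central-difference gradient of the model potential."""
    gx = (potential(x + h, y) - potential(x - h, y)) / (2 * h)
    gy = (potential(x, y + h) - potential(x, y - h)) / (2 * h)
    return np.array([gx, gy])

def steepest_descent_path(start, step=0.01, tol=1e-6, max_steps=10000):
    """Follow -grad(V) from `start` until the gradient (nearly) vanishes."""
    point = np.array(start, dtype=float)
    path = [point.copy()]
    for _ in range(max_steps):
        g = gradient(*point)
        if np.linalg.norm(g) < tol:
            break
        point -= step * g
        path.append(point.copy())
    return np.array(path)

# Relax from a point crudely standing in for a Franck-Condon-like starting geometry.
path = steepest_descent_path(start=(0.05, 0.8))
print(f"{len(path)} points, end {np.round(path[-1], 3)}, energy {potential(*path[-1]):.5f}")
```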

 

Consumer Threats

Threats to consumers from modern technology are nothing new. Although not entirely the same, consumers have faced similar threats from using personal computers, cellular phones, and the current electrical grid. As the dependency on technology grows, so does the dependency on electricity to power that technology. When you take away a single piece of technology from a consumer, such as a laptop or cell phone, the consumer might get angry but will easily adapt. Consumers can easily replace cell phones, but when access to electricity is taken away, consumers find themselves without access to all of the technology they have grown reliant on. As such, smart grid threats will impact consumers in a variety of different ways, ranging from privacy to emergency life-support situations.

 

Abstract

With the advance of modern technology, big sensing data is commonly encountered in every aspect of industrial activity, scientific research and people's lives. In order to process that big sensing data effectively with the computational power of the cloud, different compression strategies have been proposed, including data trend-based approaches and linear regression-based approaches. However, in many real-world applications, the incoming big sensing data can be extremely bumpy and discrete. Thus, at the big data processing steps of data collection and data preparation, the above compression techniques may lose effectiveness in terms of scalability and compression due to the inner constraints of their predicting models. To improve the effectiveness and efficiency of processing such real-world big sensing data, in this chapter a novel nonlinear regression prediction model is introduced. The related details, including regression design, least squares, and triangular transform, are also discussed. To fully exploit the power and resources offered by the cloud, the proposed nonlinear compression is implemented with MapReduce for scalability. Through our experiment based on real-world earthquake big sensing data, we demonstrate that compression based on the proposed nonlinear regression model can achieve significant storage and time performance gains compared to previous compression models when processing similar big sensing data on the cloud.
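As a rough, generic illustration of regression-based compression (not the chapter's specific nonlinear model with its triangular transform and MapReduce implementation), the sketch below fits a quadratic least-squares polynomial to fixed-size blocks of a sensor stream and stores only the coefficients, reconstructing the signal approximately on demand. The block size, polynomial degree and synthetic test signal are assumptions chosen for illustration.

```python
import numpy as np

def compress(signal, block=64, degree=2):
    """Keep only degree+1 least-squares coefficients per block (lossy compression)."""
    coeffs = []
    for start in range(0, len(signal), block):
        chunk = signal[start:start + block]
        x = np.arange(len(chunk))
        coeffs.append(np.polyfit(x, chunk, deg=degree))
    return coeffs, len(signal)

def decompress(coeffs, total_len, block=64):
    """Rebuild an approximate signal from the stored block coefficients."""
    pieces = []
    for i, c in enumerate(coeffs):
        n = min(block, total_len - i * block)
        pieces.append(np.polyval(c, np.arange(n)))
    return np.concatenate(pieces)

# Toy "bumpy" sensor stream standing in for real seismic readings.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 1000)
signal = np.sin(t) + 0.3 * np.sin(7 * t) + 0.05 * rng.standard_normal(t.size)

coeffs, n = compress(signal)
approx = decompress(coeffs, n)
ratio = signal.size / (len(coeffs) * len(coeffs[0]))
rmse = np.sqrt(np.mean((signal - approx) ** 2))
print(f"compression ratio ~{ratio:.1f}x, reconstruction RMSE ~{rmse:.3f}")
```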

The Insidious Role of Device and Communication Technologies

Three important influences of modern technologies become evident. First, the storage of personal preferences in handheld devices makes them ready for use at any time and in any place. Second, network access via wireless cell phones and devices makes it feasible to reach anyone at any time and in any place. Third, Internet access provides a lookup for the type of POC3 functions on the RC3 emotional objects of the human self. Thus, the reaction time is reduced, and Internet dialog becomes as feasible as a "social interaction meeting."

 

The features of these technologically sound and robust dialogs alter the very fabric and flavor of social interactions. Figure 14.4 depicts the impact of Android devices, the networks, and the Internet technologies on the emotional and reasonable content of human relations.

 

In using the POC3 approach to deal with RC3 factors in human emotions, the use of technology plays an underlying role. Pictures on Android devices can show concern; a movie of events can soothe misunderstandings; face-to-face dialog (even if it is over a handheld device) can heal emotionally broken ties, and so on. In a sense, technology-assisted social relations can be superior, provided the communication medium does not distort or inject socially harmful content. There is good reason to expect that good relations can only become better when the purpose and intention follow the writings of Fromm. Conversely, bad relations can deteriorate through abuse of technology. Unexpected disruptions in negotiations can break them. Social balances can be significantly altered by mother-in-law scenarios or the airing of misplaced remarks by politicians during campaigns.

 

In the public domain, the media does influence social tastes and choices. The advertising industry thrives on this premise. Unfortunately, the lesser forces in the society resort to misstatements, false ads, and weird misrepresentations. Truth is as easily killed by greed as beauty is prostituted by thugs or as monuments are driven over by tanks. Technically robust media is devoid of such human deficiencies. The nature and types of distortions, delays, and noises have scientific properties that can be compensated or statistically estimated.

In the era of modern technologies, students are exposed to cutting-edge online learning platforms. Many factors are involved in this mode of teaching and learning, and monitoring the students' behavioral state is quite difficult (Dutta et al., 2019). Determining the state and behavior of a student engaged in a daily routine is complex; it can be addressed with brain-computer interface technology, which recognizes an individual's cognitive state based on the tasks they carry out (Huang et al., 2016). Cognitive assessment is limited by one's working memory capacity: the memory in which information is retained and manipulated by the brain, covering aspects that range from performing a certain task to crucial decision-making (Knyazev et al., 2017). An individual's working memory plays a vital role, since overloading it results in a state of confusion and reduces the ability to learn, which can affect the subject's behavioral and mental state. With the help of state-of-the-art tools such as electroencephalogram (EEG) devices, these signals can be analyzed in real time. The purpose is to build an effective model that helps understand user perception and personal interpretation, and subsequently provides a carefully designed predictive analysis (Appriou et al., 2018; Lin & Kao, 2018). The conceptual model bridges the existing gap by identifying and addressing cognitive issues and provides feasible performance.

 

 

EEG signals span several frequency ranges (Fig. 4.1), from the lowest to the highest frequency levels. Delta (0.5–4 Hz) waves appear during deep meditation, and theta (4–8 Hz) waves occur mostly during sleep, supporting intuition and the senses; these are the lowest frequency ranges. Alpha and beta waves peak at about 8–13 Hz and 13–30 Hz, respectively. The alpha range is a viable parameter that captures the cognitive pattern (Xue et al., 2016). Each band plays a special role in cognitive activity. The received EEG signal is a combination of useful information and several artifacts. Unwanted components are filtered out using least mean squares (LMS) in a pre-processing step, and discrete wavelet transform (DWT) decomposition is applied to the non-stationary signals to obtain spectral and statistical features such as entropy, energy, and mean value (Ilyas et al., 2016; Noshadi et al., 2016; Xue et al., 2016; Zhiwei & Minfen, 2017). Clustering is done via a fuzzy fractal dimension (FFD) measure on the extracted features. Using deep learning techniques such as convolutional neural networks (CNNs), classification can then distinguish higher and lower concentration levels. The purpose of this chapter is therefore to improve present-day online learning systems by analyzing brain signals obtained via EEG during various learning tasks.
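As a small, self-contained illustration of the band structure described above, the sketch below estimates power in the delta, theta, alpha and beta bands of one EEG channel using Welch's power spectral density. The sampling rate and the synthetic test signal are assumptions for illustration; the sketch does not reproduce the chapter's LMS filtering, DWT feature extraction, FFD clustering or CNN classification.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    """Approximate power per EEG band from the Welch power spectral density."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 4 * int(fs)))
    df = freqs[1] - freqs[0]
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum() * df)
            for name, (lo, hi) in BANDS.items()}

# Synthetic single-channel "EEG": a 10 Hz (alpha) rhythm buried in noise.
fs = 256.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * rng.standard_normal(t.size)

for band, power in band_powers(eeg, fs).items():
    print(f"{band:>5}: {power:.3e}")  # alpha should dominate for this toy signal
```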

These are the 12 things most likely to destroy the world

A new report claims to offer "the first science-based list of global risks with a potentially infinite impact where in extreme cases all human life could end." Those risks, the authors argue, include everything from climate change to supervolcanoes to artificial intelligence.

By "infinite impact," the authors — led by Dennis Pamlin of the Global Challenge Foundation and Stuart Armstrong of the Future of Humanity Institute — mean risks capable of either causing human extinction or leading to a situation where "civilization collapses to a state of great suffering and does not recover."

 

The good news is that the authors aren't convinced we're doomed. Pamlin and Armstrong are of the view that humans have a long time left — possibly millions of years: "The dinosaurs were around for 135 million years and if we are intelligent, there are good chances that we could live for much longer," they write. Roughly 108 billion people have ever been alive, and Pamlin and Armstrong estimate that, if humanity lasts for 50 million years, the total number of humans who will ever live is more like 3 quadrillion.

That's an optimistic assessment of humanity's prospects, but it also means that if something happens to make humans go extinct, the moral harm done will be immense. Guarding against events with even a small probability of causing that is worthwhile.

 

So the report's authors conducted a scientific literature review and identified 12 plausible ways it could happen:

 

Catastrophic climate change

The scenario that the authors envision here isn't 2ºC (3.6ºF) warming, of the kind that climate negotiators have been fighting to avoid for decades. It's warming of 4 or 6ºC (7.2 or 10.8ºF), a truly horrific scenario which it's not clear humans could survive.

According to a 2013 World Bank report, "there is also no certainty that adaptation to a 4°C world is possible." Warming at that level would displace huge numbers of people as sea levels rise and coastal areas become submerged. Agriculture would take a giant hit.

 

Pamlin and Armstrong also express concern about geoengineering. In such an extreme warming scenario, things like spraying sulfate particles into the stratosphere to cool the Earth may start to look attractive to policymakers or even private individuals. But the risks are unknown, and Pamlin and Armstrong conclude that "the biggest challenge is that geoengineering may backfire and simply make matters worse."

 

Nuclear war

The "good" news here is that nuclear war could only end humanity under very special circumstances. Limited exchanges, like the US's bombings of Hiroshima and Nagasaki in World War II, would be humanitarian catastrophes but couldn't render humans extinct.

Even significantly larger exchanges fall short of the level of impact Pamlin and Armstrong require. "Even if the entire populations of Europe, Russia and the USA were directly wiped out in a nuclear war — an outcome that some studies have shown to be physically impossible, given population dispersal and the number of missiles in existence — that would not raise the war to the first level of impact, which requires > 2 billion affected," Pamlin and Armstrong write.

 

So why does nuclear war make the list? Because of the possibility of nuclear winter. That is, if enough nukes are detonated, world temperatures would fall dramatically and quickly, disrupting food production and possibly rendering human life impossible. It's unclear if that's even possible, or how big a war you'd need to trigger it, but if it is a possibility, that means a massive nuclear exchange is a possible cause of human extinction.

 

 

Global pandemic

As with nuclear war, not just any pandemic qualifies. Past pandemics — like the Black Death or the Spanish flu of 1918 — have killed tens of millions of people, but failed to halt civilization. The authors are interested in an even more catastrophic scenario.

 

Is that plausible? Medicine has improved dramatically since the Spanish flu. But on the flip side, transportation across great distances has increased, and more people are living in dense urban areas. That makes worldwide transmission much more of a possibility.

 

Even a pandemic that killed off most of humanity would surely leave a few survivors who have immunity to the disease. The risk isn't that a single contagion kills everyone; it's that a pandemic kills enough people that the rudiments of civilization — agriculture, principally — can't be maintained and the survivors die off.

 

Global system collapse

This is a vague one, but it basically means the world's economic and political systems collapse, by way of something like "a severe, prolonged depression with high bankruptcy rates and high unemployment, a breakdown in normal commerce caused by hyperinflation, or even an economically-caused sharp increase in the death rate and perhaps even a decline in population."

The paper also mentions other possibilities, like a coronal mass ejection from the Sun that disrupts electrical systems on Earth.

 

That said, it's unclear whether these things would pose an existential threat. Humanity has survived past economic downturns — even massive ones like the Great Depression. An economic collapse would have to be considerably more massive than that to risk human extinction or to kill enough people that the survivors couldn't recover.

 

Major asteroid impact

Major asteroid impacts have caused large-scale extinction on Earth in the past. Most famously, the Chicxulub impact 66 million years ago is widely believed to have caused the mass extinction that wiped out the dinosaurs (an alternative theory blames volcanic eruptions, about which more in a second). Theoretically, a future impact could have a similar effect.

 

Supervolcano

As with asteroids, there's historical precedent for volcanic eruptions causing mass extinction. The Permian–Triassic extinction event, which rendered something like 90 percent of the Earth's species extinct, is believed to have been caused by an eruption.

 

Eruptions can cause significant global cooling and can disrupt agricultural production. They're also basically impossible to prevent, at least today, though they're also extremely rare. The authors conclude another Permian-Triassic level eruption is "extremely unlikely on human timescales, but the damage from even a smaller eruption could affect the climate, damage the biosphere, affect food supplies, and create political instability."

 

As with pandemics, the risk isn't so much that the event itself will kill everyone so much as that it'd make continued survival untenable for those who lived through it.

 

Synthetic biology

This isn't a risk today, but it could be in the future. Synthetic biology is an emerging scientific field that focuses on the creation of biological systems, including artificial life.

 

The hypothetical danger is that the tools of synthetic biology could be used to engineer a supervirus or superbacteria that is more infectious and capable of mass destruction than one that evolved naturally. Most likely, such an organism would be created as a biological weapon, either for a military or a non-state actor.

 

The risk is that such a weapon would either be used in warfare or a terrorist attack, or else leak from a lab accidentally. Either scenario could wind up threatening humanity as a whole if the bioweapon spreads beyond the initial target and becomes a global problem. As with regular pandemics, actual extinction would only happen if survivors were unable to adapt to a giant population decline.

 

Nanotechnology

This is another potential risk in the future. The concern here is that nanotech democratizes industrial production, thus giving many more actors the ability to develop highly destructive weapons. "Of particular relevance is whether nanotechnology allows rapid uranium extraction and isotope separation and the construction of nuclear bombs, which would increase the severity of the consequent conflicts," Pamlin and Armstrong write. Traditional balance-of-power dynamics wouldn't apply if individuals and small groups were capable of amassing large, powerful arsenals.

 

There's also a concern that self-replicating nanotech would create a "gray goo" scenario, in which it grows out of control and encroaches upon resources humans depend on, causing mass disruption and potentially civilizational collapse.

 

 

Artificial Intelligence

The report is also concerned with the possibility of exponential advances in artificial intelligence. Once computer programs grow advanced enough to teach themselves computer science, they could use that knowledge to improve themselves, causing a spiral of ever-increasing superintelligence.

 

If AI remains friendly to humans, this would be a very good thing indeed, and has the prospect to speed up research in a variety of domains. The risk is that AI has little use for humans and either out of malevolence or perceived necessity destroys us all.

 

Future bad governance

This is perhaps the vaguest item on the list — a kind of meta-risk. Most of the problems enumerated above would require some kind of global coordinated action to address. Climate change is the most prominent example, but in the future things like nanotech and AI regulation would need to be coordinated internationally.

 

The danger is that governance structures often fail and sometimes wind up exacerbating the problems they were trying to fix. A policy failure in dealing with a threat that could cause human extinction would thus have hugely negative consequences.

 

And after the end of the wars of the past that destroyed the planet, life returned to how it was, thanks to modern methods of recycling, construction, industry, agriculture, medicine, transportation and modern technology. This is how we should live: with knowledge and work. Dreams that are easy to achieve, and dreams that are close to impossible, will be realized thanks to useful knowledge, work and cooperation. Life must be classy and full of creativity. We will wait to see you in the future. Next: 2100.

Welcome, next generations!