
Essay on Modern Technology

Students are often asked to write an essay on Modern Technology in school and college. If you're looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on Modern Technology

Introduction to Modern Technology

Modern technology refers to the recent advancements and innovations that have made our lives easier. It includes computers, smartphones, the internet, and much more.

Benefits of Modern Technology

Modern technology has numerous benefits. It helps us communicate with people worldwide, provides information at our fingertips, and makes learning fun and interactive.

Challenges of Modern Technology

Despite the benefits, modern technology also poses some challenges. It can lead to addiction and loss of privacy. It’s crucial to use technology wisely to avoid these issues.

In conclusion, modern technology has changed our lives significantly. It’s our responsibility to use it responsibly and reap its benefits.

250 Words Essay on Modern Technology

The Advent of Modern Technology

Modern technology, an offshoot of the ceaseless human quest for innovation, has become an integral part of our lives. It has not only revolutionized communication and information dissemination but also transformed the way we live, work, and play.

Impact on Communication and Information

The advent of the Internet and smartphones has democratized information, making it accessible to everyone, everywhere. Social media platforms have given a voice to the voiceless, enabling a global dialogue that transcends geographical boundaries. Additionally, the emergence of artificial intelligence and machine learning has opened up new frontiers in data analysis and decision-making processes.

Transforming Daily Life

Modern technology has also significantly altered our daily routines. Smart homes, equipped with automated devices, have enhanced comfort and convenience. Wearable technology monitors our health, encouraging proactive wellness. Furthermore, e-commerce platforms and digital payment systems have streamlined shopping and financial transactions.

Work and Play in the Digital Age

In the workspace, technology has automated repetitive tasks, freeing up time for creative and strategic thinking. Remote working, made possible by digital tools, has blurred the lines between office and home. Meanwhile, in the realm of entertainment, virtual and augmented reality technologies have redefined our concept of play, immersing us in interactive digital worlds.

The Double-edged Sword

However, this technological revolution is a double-edged sword. While it brings countless benefits, it also presents challenges such as privacy concerns, cybercrime, and digital addiction. It is, therefore, crucial to navigate this digital landscape with caution, leveraging its advantages while mitigating its potential risks.

500 Words Essay on Modern Technology

In the contemporary era, modern technology has emerged as a significant facet of human life. It has revolutionized the way we communicate, learn, work, and entertain ourselves. The rapid evolution of technology, from the advent of the internet to the development of artificial intelligence, has had profound implications on society, economy, and culture.

The Impact of Modern Technology on Communication

Modern technology has drastically transformed the realm of communication. The rise of social media platforms and instant messaging apps has made it possible to connect with people across the globe in real time. Emails and video conferences have replaced traditional letters and face-to-face meetings, making communication faster and more efficient. However, this digital revolution has also raised concerns about privacy and the authenticity of information disseminated online.

Modern Technology and Entertainment

In the realm of entertainment, modern technology has given rise to new forms of media and has changed the way we consume content. Streaming platforms have challenged traditional television, and online gaming has become a global phenomenon. While these advancements have democratized entertainment, they have also raised questions about digital addiction and mental health.


How artificial intelligence is transforming the world

By Darrell M. West, Senior Fellow, Center for Technology Innovation, Douglas Dillon Chair in Governmental Studies, and John R. Allen

April 24, 2018

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion


Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. 1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and to demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values. 2

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

Qualities of artificial intelligence

Although there is no uniformly agreed upon definition, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” 3  According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up. 4 As such, they operate in an intentional, intelligent, and adaptive manner.

Intentionality

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.


Intelligence

AI generally is undertaken in conjunction with machine learning and data analytics. 5 Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it to analyze specific issues. All that is required are data that are sufficiently robust that algorithms can discern useful patterns. Data can come in the form of digital information, satellite imagery, visual information, text, or unstructured data.
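To make that pattern-finding loop concrete, here is a minimal sketch in Python using scikit-learn: fit a model on historical examples, then apply what it learned to a new case. The observations, attributes, and outcomes are invented for illustration, not drawn from any system discussed in this report.

```python
# A minimal sketch of the pattern-finding loop described above:
# fit a model on labeled historical data, then apply the learned
# pattern to a new case. All data here are invented.
from sklearn.linear_model import LogisticRegression

# Each row is one observation; each column is a measured attribute.
X_train = [[25, 0.1], [47, 0.9], [31, 0.2], [52, 0.8]]
y_train = [0, 1, 0, 1]  # outcomes the algorithm should learn to predict

model = LogisticRegression()
model.fit(X_train, y_train)        # "looks for underlying trends"
print(model.predict([[40, 0.7]]))  # apply the learned pattern to a new case
```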

Adaptability

AI systems have the ability to learn and adapt as they make decisions. In the transportation area, for example, semi-autonomous vehicles have tools that let drivers and vehicles know about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved “experience” is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions. And in the case of fully autonomous vehicles, advanced systems can completely control the car or truck, and make all the navigational decisions.


Applications in diverse sectors

AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways. 6

One of the reasons for the growing role of AI is the tremendous opportunities for economic development that it presents. A project undertaken by PriceWaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” 7 That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030.
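The regional figures quoted above can be checked with simple arithmetic; the short Python snippet below sums the PwC components and shows they total $15.8 trillion, consistent with the rounded $15.7 trillion headline.

```python
# Quick arithmetic check of the regional figures quoted above
# (trillions of dollars, PwC estimates as cited in the text).
regions = {
    "China": 7.0,
    "North America": 3.7,
    "Northern Europe": 1.8,
    "Africa and Oceania": 1.2,
    "Rest of Asia outside China": 0.9,
    "Southern Europe": 0.7,
    "Latin America": 0.5,
}
total = sum(regions.values())
print(f"Regional sum: ${total:.1f} trillion")  # 15.8, matching ~$15.7T after rounding
```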

Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” 8 Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.

Finance

Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion. 9 According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” 10 In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” 11 These advances are designed to take the emotion out of investing and undertake decisions based on analytical considerations, and make these choices in a matter of minutes.

A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decisionmaking. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions. 12 Powered in some places by quantum computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location. 13 That dramatically increases storage capacity and decreases processing times.
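For readers who want a concrete picture of the matching step, the toy Python sketch below pairs buy and sell orders without human intervention. It is a deliberate simplification: real matching engines enforce price-time priority, partial fills, and microsecond latencies, and the prices and quantities here are invented.

```python
# A toy order-matching engine: buy and sell orders meet in a book,
# and the engine pairs compatible prices automatically.
import heapq

buys, sells = [], []  # heaps ordered so the best price pops first

def submit(side, price, qty):
    book = buys if side == "buy" else sells
    # negate buy prices so the highest bid pops first from the min-heap
    heapq.heappush(book, (-price if side == "buy" else price, qty))
    match()

def match():
    while buys and sells and -buys[0][0] >= sells[0][0]:
        (nbid, bqty), (ask, sqty) = heapq.heappop(buys), heapq.heappop(sells)
        traded = min(bqty, sqty)
        print(f"trade: {traded} @ {ask}")
        if bqty > traded:
            heapq.heappush(buys, (nbid, bqty - traded))  # leftover bid
        if sqty > traded:
            heapq.heappush(sells, (ask, sqty - traded))  # leftover ask

submit("sell", 100.0, 50)
submit("buy", 100.5, 30)  # crosses the ask -> trade: 30 @ 100.0
```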

Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels. 14
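One common way to operationalize this kind of outlier flagging is an unsupervised anomaly detector. The sketch below, using scikit-learn's IsolationForest on invented transaction features, is one illustrative possibility rather than a description of any production fraud system.

```python
# A minimal sketch of anomaly flagging: fit an outlier detector on
# transaction features and surface deviant cases for human review.
# Features and thresholds here are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 2], scale=[10, 0.5], size=(500, 2))  # amount, frequency
suspicious = np.array([[400, 9.0], [350, 8.5]])                   # deviant cases
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)  # -1 marks outliers
print(transactions[flags == -1])        # candidates for additional investigation
```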

National security

AI plays a substantial role in national defense. Through its Project Maven, the American military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” 15 According to Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.” 16


The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time-competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decisionmaking to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.

While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyperwar speeds. The challenge in the West of where to position “humans in the loop” in a hyperwar scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict. 17

Just as AI will profoundly affect the speed of warfare, the proliferation of zero-day or zero-second cyber threats as well as polymorphic malware will challenge even the most sophisticated signature-based cyber protection. This forces significant improvement to existing cyber defenses. Increasingly, vulnerable systems will need to migrate to a layered approach to cybersecurity built on cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. This capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.
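The "string component" idea can be illustrated in a few lines: scan inbound bytes for sequences previously observed in known malware. The signatures below are invented placeholders; the sketch also makes clear why polymorphic malware, which mutates its bytes, defeats fixed signatures and pushes defenders toward the layered, learning-based approaches described above.

```python
# A minimal sketch of signature matching: scan an inbound file for
# byte sequences previously seen in known malware. The signatures
# here are invented placeholders, not real indicators.
KNOWN_BAD_STRINGS = [b"\xde\xad\xbe\xef", b"wannacry_kill_switch"]  # hypothetical

def looks_malicious(payload: bytes) -> bool:
    """Return True if any known-bad byte string appears in the file."""
    return any(sig in payload for sig in KNOWN_BAD_STRINGS)

print(looks_malicious(b"innocuous content"))                 # False
print(looks_malicious(b"header wannacry_kill_switch body"))  # True
```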

Preparing for hyperwar and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. 18 As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs. 19 The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well. 20

Health care

AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computer Tomography (CT) images.” 21 According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.

What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiological imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of identifying the unhealthy versus healthy node.
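As a schematic of that training setup, the sketch below fits a small convolutional network to labeled image patches using TensorFlow/Keras. The shapes, labels, and random data are stand-ins; a real medical imaging pipeline requires vastly more data, careful validation, and regulatory review.

```python
# A minimal sketch of training a classifier to distinguish "normal"
# from "irregular" image patches. Shapes and data are invented.
import numpy as np
import tensorflow as tf

# Stand-in for labeled CT patches: 64x64 grayscale, label 1 = irregular.
images = np.random.rand(32, 64, 64, 1).astype("float32")
labels = np.random.randint(0, 2, size=(32,))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "irregular"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(images, labels, epochs=2, verbose=0)  # learn from the labeled examples
print(model.predict(images[:1]))                # risk estimate for one new scan
```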

AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.” 22

Criminal justice

AI is being deployed in the criminal justice area. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity. 23
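The analysts' exercise of asking which inputs actually carry predictive weight can be illustrated with a toy model. In the sketch below the data are random stand-ins, not Chicago's, and the feature names merely mirror the categories described above; the point is only that a fitted model's coefficients expose each feature's pull on the score.

```python
# A toy illustration of inspecting which inputs drive a risk model.
# All data are invented; large |weight| indicates a strong predictor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["age", "prior_arrests", "shooting_victim", "gang_affiliation"]
X = rng.random((200, 4))
# Construct an invented outcome that depends mainly on the first column.
y = (X[:, 0] + 0.1 * rng.random(200) > 0.6).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, weight in zip(features, model.coef_[0]):
    print(f"{name:>16}: {weight:+.2f}")
```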

Judicial experts claim AI programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute Associate Caleb Watney writes:

Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates. 24

However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” 25 The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.

Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” 26 New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists. 27 Put differently, China has become the world’s leading AI-powered surveillance state.

Transportation

Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector. 28

Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps. 29

Light detection and ranging systems (LIDARs) and AI are key to navigation and collision avoidance. LIDAR systems combine light and radar instruments. They are mounted on top of vehicles and use imaging in a 360-degree environment, with radar and light beams measuring the speed and distance of surrounding objects. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, applies brakes and steering when needed, and does so instantly so as to avoid accidents.
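The ranging principle behind this is time of flight: a light pulse travels out and back, so distance is the speed of light times the elapsed time, divided by two, and successive ranges yield closing speed. A back-of-the-envelope sketch, with invented timings:

```python
# Time-of-flight ranging: a pulse travels out and back, so distance
# is (speed of light x elapsed time) / 2. The timings are invented.
C = 299_792_458.0  # speed of light, m/s

def range_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0

r1 = range_m(2.0e-7)             # ~30 m to an object ahead
r2 = range_m(1.9e-7)             # 0.1 s later the echo returns sooner
closing_speed = (r1 - r2) / 0.1  # m/s toward the vehicle
print(round(r1, 2), round(r2, 2), round(closing_speed, 2))
```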


Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself. 30 Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. 31

Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrate the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service. 32

However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred. 33 Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.

Smart cities

Metropolitan governments are using AI to improve urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:

The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls. 34

Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.
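As a schematic of the recommendation step, the toy sketch below trains a decision tree on past calls and suggests on-site treatment versus transport for a new call. The features and data are invented stand-ins, not Cincinnati's actual model.

```python
# A minimal sketch of a dispatch recommender: learn from past calls,
# then suggest a response for a new one. All data are invented.
from sklearn.tree import DecisionTreeClassifier

# Columns: call_type (0=minor, 1=severe), bad_weather (0/1), distance_km
past_calls = [[0, 0, 2.0], [1, 1, 5.0], [0, 1, 1.0], [1, 0, 8.0]]
outcomes   = [0, 1, 0, 1]  # 0 = treated on-site, 1 = transported

model = DecisionTreeClassifier().fit(past_calls, outcomes)
print(model.predict([[1, 1, 3.0]]))  # recommendation for a new call
```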

Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards. 35

Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.” 36

Policy, regulatory, and ethical issues

These examples from a variety of sectors demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decisionmaking within organizations, and improving efficiency and response times.

At the same time, though, these developments raise important policy, regulatory, and ethical issues. For example, how should we promote data access? How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm? 37


Data access problems

The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development. 38

According to a McKinsey Global Institute study, nations that promote open data sources and data sharing are the ones most likely to see AI advances. In this regard, the United States has a substantial advantage over China. Global ratings on data openness show that the United States ranks eighth overall in the world, compared to 93rd for China. 39

But right now, the United States does not have a coherent national data strategy. There are few protocols for promoting research access or platforms that make it possible to gain new insights from proprietary data. It is not always clear who owns data or how much belongs in the public sphere. These uncertainties limit the innovation economy and act as a drag on academic research. In the following section, we outline ways to improve data access for researchers.

Biases in data and algorithms

In some instances, certain AI systems are thought to have enabled discriminatory or biased practices. 40 For example, Airbnb has been accused of having homeowners on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.” 41

Racial issues also come up with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” 42 Unless the databases have access to diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.
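Mechanically, most such systems reduce each face to a numeric embedding and return the nearest entry in a database. The sketch below uses invented three-dimensional vectors in place of learned deep embeddings; it also shows where the bias enters: if the database over-represents one group, nearest-neighbor matching degrades for everyone else.

```python
# A schematic of face matching: reduce faces to numeric embeddings
# and return the nearest database entry. Vectors here are invented
# stand-ins for learned deep embeddings; a skewed database means
# poor nearest-neighbor matches for under-represented groups.
import numpy as np

database = {  # name -> embedding (stand-in values)
    "person_a": np.array([0.1, 0.9, 0.3]),
    "person_b": np.array([0.8, 0.2, 0.5]),
}

def identify(probe: np.ndarray) -> str:
    return min(database, key=lambda name: np.linalg.norm(database[name] - probe))

print(identify(np.array([0.75, 0.25, 0.45])))  # -> person_b
```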

Many historical data sets reflect traditional values, which may or may not represent the preferences wanted in a current system. As Buolamwini notes, such an approach risks repeating inequities of the past:

The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create. 43

AI ethics and transparency

Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decisionmaking. Some people want to have a better understanding of how algorithms function and what choices are being made. 44

In the United States, many urban schools use algorithms for enrollment decisions based on a variety of considerations, such as parent preferences, neighborhood qualities, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.” 45 Enrollment choices can be expected to be very different when considerations of this sort come into play.

Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers. 46

For these reasons, the EU is implementing the General Data Protection Regulation (GDPR) in May 2018. The rules specify that people have “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” in the form of an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with information on how the “black box” operates. 47

Legal liability

There are questions concerning the legal liability of AI systems. If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules. A body of case law has shown that the situation’s facts and circumstances determine liability and influence the kind of penalties that are imposed. Those can range from civil fines to imprisonment for major harms. 48 The Uber-related fatality in Arizona will be an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in terms of road testing. It remains to be seen if there will be lawsuits in this case and who is sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, software developers, or the auto manufacturer. Given the multiple people and organizations involved in the road testing, there are many legal questions to be resolved.

In non-transportation areas, digital platforms often have limited liability for what happens on their sites. For example, in the case of Airbnb, the firm “requires that people agree to waive their right to sue, or to join in any class-action lawsuit or class-action arbitration, to use the service.” By demanding that its users sacrifice basic rights, the company limits consumer protections and therefore curtails the ability of people to fight discrimination arising from unfair algorithms. 49 But whether the principle of neutral networks holds up in many sectors is yet to be determined on a widespread basis.

Recommendations

In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. This includes improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.

Improve data access

The United States should develop a data strategy that promotes innovation and consumer protection. Right now, there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design. AI requires data to test and improve its learning capacity. 50 Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of artificial intelligence.

In general, the research community needs better access to government and business data, although with appropriate safeguards to make sure researchers do not misuse data in the way Cambridge Analytica did with Facebook information. There is a variety of ways researchers could gain data access. One is through voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to explore inequality. 51 As part of the arrangement, researchers were required to undergo background checks and could only access data from secured sites in order to protect user privacy and security.


Google long has made available search results in aggregated form for researchers and the general public. Through its “Trends” site, scholars can analyze topics such as interest in Trump, views about democracy, and perspectives on the overall economy. 52 That helps people track movements in public interest and identify topics that galvanize the general public.

Twitter makes much of its tweets available to researchers through application programming interfaces, commonly referred to as APIs. These tools help people outside the company build application software and make use of data from its social media platform. They can study patterns of social media communications and see how people are commenting on or reacting to current events.
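In generic terms, API access looks like the sketch below: an authenticated HTTP request returning structured records. The endpoint URL, parameters, and response fields are placeholders invented for illustration, not Twitter's actual API, which requires registered credentials and has its own routes and rate limits.

```python
# A generic sketch of API-based data access. The endpoint, parameters,
# and fields below are hypothetical placeholders, not a real service.
import requests

resp = requests.get(
    "https://api.example.com/v1/posts/search",      # hypothetical endpoint
    params={"query": "artificial intelligence", "max_results": 100},
    headers={"Authorization": "Bearer YOUR_TOKEN"},  # placeholder credential
    timeout=10,
)
resp.raise_for_status()
for post in resp.json().get("data", []):
    print(post.get("created_at"), post.get("text", "")[:80])
```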

In some sectors where there is a discernible public benefit, governments can facilitate collaboration by building infrastructure that shares data. For example, the National Cancer Institute has pioneered a data-sharing protocol where certified researchers can query the health data it holds, using de-identified information drawn from clinical data, claims information, and drug therapies. That enables researchers to evaluate efficacy and effectiveness, and make recommendations regarding the best medical approaches, without compromising the privacy of individual patients.
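A minimal sketch of the de-identification idea: drop direct identifiers and replace the patient ID with a salted one-way hash before records leave the institution. The field names and values are invented, and real de-identification regimes go well beyond this single step.

```python
# A minimal sketch of de-identification: strip direct identifiers and
# replace patient IDs with one-way hashes. Field names are invented,
# and salted hashing alone is not a complete de-identification regime.
import hashlib

def deidentify(record: dict, salt: str = "site-secret") -> dict:
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:12]
    return {
        "patient_token": token,  # stable pseudonym, not reversible
        "diagnosis": record["diagnosis"],
        "therapy": record["therapy"],
        # name, address, birth date, etc. are deliberately dropped
    }

print(deidentify({"patient_id": "12345", "name": "J. Doe",
                  "diagnosis": "C81.1", "therapy": "ABVD"}))
```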

There could be public-private data partnerships that combine government and business data sets to improve system performance. For example, cities could integrate information from ride-sharing services with their own material on social service locations, bus lines, mass transit, and highway congestion to improve transportation. That would help metropolitan areas deal with traffic tie-ups and assist in highway and mass transit planning.

Some combination of these approaches would improve data access for researchers, the government, and the business community, without impinging on personal privacy. As noted by Ian Buck, the vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.” 53 Through its Data.gov portal, the federal government already has put over 230,000 data sets into the public domain, and this has propelled innovation and aided improvements in AI and data analytic technologies. 54 The private sector also needs to facilitate research data access so that society can achieve the full benefits of artificial intelligence.

Increase government investment in AI

According to Greg Brockman, the co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology. 55 That is far lower than the amount being spent by China or other leading nations in this area of research. That shortfall is noteworthy because the economic payoffs of AI are substantial. In order to boost economic development and social innovation, federal officials need to increase investment in artificial intelligence and data analytics. Higher investment is likely to pay for itself many times over in economic and social benefits. 56

Promote digital education and workforce development

As AI applications accelerate across many sectors, it is vital that we reimagine our educational institutions for a world where AI will be ubiquitous and students need a different kind of training than they currently receive. Right now, many students do not receive instruction in the kinds of skills that will be needed in an AI-dominated landscape. For example, there currently are shortages of data scientists, computer scientists, engineers, coders, and platform developers. Unless our educational system generates more people with these capabilities, the shortage will limit AI development.

For these reasons, both state and federal governments have been investing in AI human capital. For example, in 2017, the National Science Foundation funded over 6,500 graduate students in computer-related fields and has launched several new initiatives designed to encourage data and computer science at all levels from pre-K to higher and continuing education. 57 The goal is to build a larger pipeline of AI and data analytic personnel so that the United States can reap the full advantages of the knowledge revolution.

But there also needs to be substantial changes in the process of learning itself. It is not just technical skills that are needed in an AI world but skills of critical reasoning, collaboration, design, visual display of information, and independent thinking, among others. AI will reconfigure how society and the economy operate, and there needs to be “big picture” thinking on what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly about many questions and integrate knowledge from a number of different areas.

One example of new ways to prepare students for a digital future is IBM’s Teacher Advisor program, utilizing Watson’s free online tools to help teachers bring the latest knowledge into the classroom. They enable instructors to develop new lesson plans in STEM and non-STEM fields, find relevant instructional videos, and help students get the most out of the classroom. 58 As such, they are precursors of new educational environments that need to be created.

Create a federal AI advisory committee

Federal officials need to think about how they deal with artificial intelligence. As noted previously, there are many issues ranging from the need for improved data access to addressing issues of bias and discrimination. It is vital that these and other concerns be considered so we gain the full benefits of this emerging technology.

In order to move forward in this area, several members of Congress have introduced the “Future of Artificial Intelligence Act,” a bill designed to establish broad policy and legal principles for AI. It proposes the secretary of commerce create a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to get advice on ways to promote a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.” 59

The specific questions the committee is asked to address include the following: competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, job impact, bias, and consumer impact. The committee is directed to submit a report to Congress and the administration 540 days after enactment regarding any legislative or administrative action needed on AI.

This legislation is a step in the right direction, although the field is moving so rapidly that we would recommend shortening the reporting timeline from 540 days to 180 days. Waiting nearly two years for a committee report will certainly result in missed opportunities and a lack of action on important issues. Given rapid advances in the field, having a much quicker turnaround time on the committee analysis would be quite beneficial.

Engage with state and local officials

States and localities also are taking action on AI. For example, the New York City Council unanimously passed a bill that directed the mayor to form a task force that would “monitor the fairness and validity of algorithms used by municipal agencies.” 60 The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.” 61

According to the legislation’s developers, city officials want to know how these algorithms work and make sure there is sufficient AI transparency and accountability. In addition, there is concern regarding the fairness and biases of AI algorithms, so the task force has been directed to analyze these issues and make recommendations regarding future usage. It is scheduled to report back to the mayor on a range of AI policy, legal, and regulatory issues by late 2019.

Some observers already are worrying that the taskforce won’t go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University argues that the bill originally required companies to make the AI source code available to the public for inspection, and that there be simulations of its decisionmaking using actual data. After criticism of those provisions, however, former Councilman James Vacca dropped the requirements in favor of a task force studying these issues. He and other city officials were concerned that publication of proprietary information on algorithms would slow innovation and make it difficult to find AI vendors who would work with the city. 62 It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.

Regulate broad objectives more than specific algorithms

The European Union has taken a restrictive stance on these issues of data collection and analysis. 63 It has rules limiting the ability of companies to collect data on road conditions and to map street views. Because many of these countries worry that people’s personal information in unencrypted Wi-Fi networks is swept up in overall data collection, the EU has fined technology firms, demanded copies of data, and placed limits on the material collected. 64 This has made it more difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.

The GDPR being implemented in Europe places severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” 65 In addition, these new rules give citizens the right to review how digital services made specific algorithmic choices that affect them.


If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the “black boxes” and see exactly how specific algorithms operate. Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.

Take biases seriously

Bias and discrimination are serious issues for AI. There already have been a number of cases of unfair treatment linked to historic data, and steps need to be undertaken to make sure that does not become prevalent in artificial intelligence. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. That will help protect consumers and build confidence in these systems as a whole.

For these advances to be widely adopted, more transparency is needed in how AI systems operate. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.” 66

Maintain mechanisms for human oversight and control

Some individuals have argued that there need to be avenues for humans to exercise oversight and control of AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni argues there should be rules for regulating these systems. First, he says, AI must be governed by all the laws that already have been developed for human behavior, including regulations concerning “cyberbullying, stock manipulation or terrorist threats,” as well as “entrap[ping] people into committing crimes.” Second, he believes that these systems should disclose they are automated systems and not human beings. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.” 67 His rationale is that these tools store so much data that people have to be cognizant of the privacy risks posed by AI.

In the same vein, the IEEE Global Initiative has ethical guidelines for AI and autonomous systems. Its experts suggest that these models be programmed with consideration for widely accepted human norms and rules for behavior. AI algorithms need to take into account the importance of these norms, how norm conflict can be resolved, and ways these systems can be transparent about norm resolution. Software designs should be programmed for “nondeception” and “honesty,” according to ethics experts. When failures occur, there must be mitigation mechanisms to deal with the consequences. In particular, AI must be sensitive to problems such as bias, discrimination, and fairness. 68

A group of machine learning experts claim it is possible to automate ethical decisionmaking. Using the trolley problem as a moral dilemma, they ask the following question: If an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians who are crossing the street? They devised a “voting-based system” that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied the overall perspective of these individuals to a range of vehicular possibilities. That allowed them to automate ethical decisionmaking in AI algorithms, taking public preferences into account. 69 This procedure, of course, does not reduce the tragedy involved in any kind of fatality, such as seen in the Uber case, but it provides a mechanism to help AI developers incorporate ethical considerations in their planning.
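The aggregation step of such a voting-based system can be sketched in a few lines: gather many respondents' choices per dilemma and adopt the majority preference as the default policy. The scenario labels and vote counts below are invented, and the published system is considerably more sophisticated than a raw majority vote.

```python
# A toy version of the aggregation step: collect respondents' choices
# per dilemma and adopt the majority preference as the default policy.
# Scenario labels and votes are invented.
from collections import Counter

votes = {  # scenario -> list of individual choices
    "swerve_vs_stay": ["protect_pedestrians"] * 70 + ["protect_passengers"] * 30,
    "one_vs_five":    ["minimize_total_harm"] * 90 + ["do_not_intervene"] * 10,
}

policy = {s: Counter(choices).most_common(1)[0][0] for s, choices in votes.items()}
print(policy)  # majority preference applied as the vehicle's default
```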

Penalize malicious behavior and promote cybersecurity

As with any emerging technology, it is important to discourage malicious treatment designed to trick software or use it for undesirable ends. 70 This is especially important given the dual-use aspects of AI, where the same tool can be used for beneficial or malicious purposes. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology. This includes behaviors such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI in order to solicit confidential information should be seriously penalized as a way to deter such actions. 71

In a rapidly changing world with many entities having advanced computing capabilities, there needs to be serious attention devoted to cybersecurity. Countries have to be careful to safeguard their own systems and keep other nations from damaging their security. 72 According to the U.S. Department of Homeland Security, a major American bank receives around 11 million calls a week at its service center. In order to protect its telephony from denial of service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.” 73 This represents a way in which machine learning can help defend technology systems from malevolent attacks.
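As an illustration of how a learned policy engine of this general kind might work, the sketch below scores inbound calls from simple features and blocks those above a threshold. The features, training data, and threshold are invented; the bank's actual voice firewall policies are not described in the source beyond the passage quoted above.

```python
# A minimal sketch of a learned call-blocking policy: score each
# inbound call from simple features and block above a threshold.
# Features, training data, and threshold are invented.
from sklearn.linear_model import LogisticRegression

# Columns: calls_from_number_today, spoofed_caller_id (0/1), off_hours (0/1)
history = [[1, 0, 0], [40, 1, 1], [2, 0, 1], [55, 1, 0], [3, 0, 0], [60, 1, 1]]
was_malicious = [0, 1, 0, 1, 0, 1]

engine = LogisticRegression().fit(history, was_malicious)

def decide(call_features):
    p = engine.predict_proba([call_features])[0, 1]
    return "block" if p > 0.8 else "allow"  # firewall-style policy

print(decide([48, 1, 1]))  # high-volume spoofed off-hours call -> likely "block"
```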

Conclusion

To summarize, the world is on the cusp of revolutionizing many sectors through artificial intelligence and data analytics. There already are significant deployments in finance, national security, health care, criminal justice, transportation, and smart cities that have altered decisionmaking, business models, risk mitigation, and system performance. These developments are generating substantial economic and social benefits.


Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, legal realities are resolved, and how much transparency is required in AI and data analytic solutions. 74 Human choices about software development affect the way in which decisions are made and the manner in which they are integrated into organizational routines. Exactly how these processes are executed needs to be better understood because they will have substantial impact on the general public soon, and for the foreseeable future. AI may well be a revolution in human affairs, and become the single most influential human innovation in history.

Note: We appreciate the research assistance of Grace Gilberg, Jack Karsten, Hillary Schaub, and Kristjan Tomasson on this project.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Support for this publication was generously provided by Amazon. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment. 

John R. Allen is a member of the Board of Advisors of Amida Technology and on the Board of Directors of Spark Cognition. Both companies work in fields discussed in this piece.

  • Thomas Davenport, Jeff Loucks, and David Schatsky, “Bullish on the Business Value of Cognitive” (Deloitte, 2017), p. 3 (www2.deloitte.com/us/en/pages/deloitte-analytics/articles/cognitive-technology-adoption-survey.html).
  • Luke Dormehl, Thinking Machines: The Quest for Artificial Intelligence—and Where It’s Taking Us Next (New York: Penguin–TarcherPerigee, 2017).
  • Shubhendu and Vijay, “Applicability of Artificial Intelligence in Different Fields of Life.”
  • Andrew McAfee and Erik Brynjolfsson, Machine Platform Crowd: Harnessing Our Digital Future (New York: Norton, 2017).
  • Portions of this paper draw on Darrell M. West, The Future of Work: Robots, AI, and Automation, Brookings Institution Press, 2018.
  • PricewaterhouseCoopers, “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” 2017.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 1.
  • Nathaniel Popper, “Stocks and Bots,” New York Times Magazine, February 28, 2016.
  • Michael Lewis, Flash Boys: A Wall Street Revolt (New York: Norton, 2015).
  • Cade Metz, “In Quantum Computing Race, Yale Professors Battle Tech Giants,” New York Times, November 14, 2017, p. B3.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016, pp. 27-28.
  • Christian Davenport, “Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says,” Washington Post, December 3, 2017.
  • John R. Allen and Amir Husain, “On Hyperwar,” Naval Institute Proceedings, July 17, 2017, pp. 30-36.
  • Paul Mozur, “China Sets Goal to Lead in Artificial Intelligence,” New York Times, July 21, 2017, p. B1.
  • Paul Mozur and John Markoff, “Is China Outsmarting American Artificial Intelligence?” New York Times, May 28, 2017.
  • Economist, “America v China: The Battle for Digital Supremacy,” March 15, 2018.
  • Rasmus Rothe, “Applying Deep Learning to Real-World Problems,” Medium, May 23, 2017.
  • Eric Horvitz, “Reflections on the Status and Future of Artificial Intelligence,” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016, p. 5.
  • Jeff Asher and Rob Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago,” New York Times Upshot, June 13, 2017.
  • Caleb Watney, “It’s Time for our Justice System to Embrace Artificial Intelligence,” TechTank (blog), Brookings Institution, July 20, 2017.
  • Asher and Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago.”
  • Paul Mozur and Keith Bradsher, “China’s A.I. Advances Help Its Tech Industry, and State Security,” New York Times, December 3, 2017.
  • Simon Denyer, “China’s Watchful Eye,” Washington Post, January 7, 2018.
  • Cameron Kerry and Jack Karsten, “Gauging Investment in Self-Driving Cars,” Brookings Institution, October 16, 2017.
  • Portions of this section are drawn from Darrell M. West, “Driverless Cars in China, Europe, Japan, Korea, and the United States,” Brookings Institution, September 2016.
  • Yuming Ge, Xiaoman Liu, Libo Tang, and Darrell M. West, “Smart Transportation in China and the United States,” Center for Technology Innovation, Brookings Institution, December 2017.
  • Peter Holley, “Uber Signs Deal to Buy 24,000 Autonomous Vehicles from Volvo,” Washington Post, November 20, 2017.
  • Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam,” New York Times, March 19, 2018.
  • Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson, “Learning from Public Sector Experimentation with Artificial Intelligence,” TechTank (blog), Brookings Institution, June 23, 2017.
  • Boyd Cohen, “The 10 Smartest Cities in North America,” Fast Company, November 14, 2013.
  • Teena Maddox, “66% of US Cities Are Investing in Smart City Technology,” TechRepublic, November 6, 2017.
  • Osonde Osoba and William Welser IV, “The Risks of Artificial Intelligence to Security and the Future of Work” (Santa Monica, Calif.: RAND Corp., December 2017) (www.rand.org/pubs/perspectives/PE237.html).
  • Ibid., p. 7.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 7.
  • Executive Office of the President, “Preparing for the Future of Artificial Intelligence,” October 2016, pp. 30-31.
  • Elaine Glusac, “As Airbnb Grows, So Do Claims of Discrimination,” New York Times, June 21, 2016.
  • “Joy Buolamwini,” Bloomberg Businessweek, July 3, 2017, p. 80.
  • Mark Purdy and Paul Daugherty, “Why Artificial Intelligence is the Future of Growth,” Accenture, 2016.
  • Jon Valant, “Integrating Charter Schools and Choice-Based Education Systems,” Brown Center Chalkboard blog, Brookings Institution, June 23, 2017.
  • Tucker, “‘A White Mask Worked Better.’”
  • Cliff Kuang, “Can A.I. Be Taught to Explain Itself?” New York Times Magazine, November 21, 2017.
  • Yale Law School Information Society Project, “Governing Machine Learning,” September 2017.
  • Katie Benner, “Airbnb Vows to Fight Racism, But Its Users Can’t Sue to Prompt Fairness,” New York Times, June 19, 2016.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.”
  • Nancy Scola, “Facebook’s Next Project: American Inequality,” Politico, February 19, 2018.
  • Darrell M. West, “What Internet Search Data Reveals about Donald Trump’s First Year in Office,” Brookings Institution policy report, January 17, 2018.
  • Ian Buck, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • Keith Nakasone, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Greg Brockman, “The Dawn of Artificial Intelligence,” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016.
  • Amir Khosrowshahi, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • James Kurose, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Stephen Noonoo, “Teachers Can Now Use IBM’s Watson to Search for Free Lesson Plans,” EdSurge, September 13, 2017.
  • Congress.gov, “H.R. 4625, FUTURE of Artificial Intelligence Act of 2017,” December 12, 2017.
  • Elizabeth Zima, “Could New York City’s AI Transparency Bill Be a Model for the Country?” Government Technology, January 4, 2018.
  • Julia Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable,” New Yorker, December 20, 2017.
  • Sheera Frenkel, “Tech Giants Brace for Europe’s New Data Privacy Rules,” New York Times, January 28, 2018.
  • Claire Miller and Kevin O’Brien, “Germany’s Complicated Relationship with Google Street View,” New York Times, April 23, 2013.
  • Cade Metz, “Artificial Intelligence is Setting Up the Internet for a Huge Clash with Europe,” Wired, July 11, 2016.
  • Eric Siegel, “Predictive Analytics Interview Series: Andrew Burt,” Predictive Analytics Times, June 14, 2017.
  • Oren Etzioni, “How to Regulate Artificial Intelligence,” New York Times, September 1, 2017.
  • “Ethical Considerations in Artificial Intelligence and Autonomous Systems,” unpublished paper, IEEE Global Initiative, 2018.
  • Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia, “A Voting-Based System for Ethical Decision Making,” Computers and Society, September 20, 2017 (www.media.mit.edu/publications/a-voting-based-system-for-ethical-decision-making/).
  • Miles Brundage et al., “The Malicious Use of Artificial Intelligence,” University of Oxford unpublished paper, February 2018.
  • John Markoff, “As Artificial Intelligence Evolves, So Does Its Criminal Potential,” New York Times, October 24, 2016, p. B3.
  • Economist, “The Challenger: Technopolitics,” March 17, 2018.
  • Douglas Maughan, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Levi Tillemann and Colin McCormick, “Roadmapping a U.S.-German Agenda for Artificial Intelligence Policy,” New America Foundation, March 2017.


The Impact of Digital Technologies

Technologies can help make our world fairer, more peaceful, and more just. Digital advances can support and accelerate achievement of each of the 17 Sustainable Development Goals – from ending extreme poverty to reducing maternal and infant mortality, promoting sustainable farming and decent work, and achieving universal literacy. But technologies can also threaten privacy, erode security and fuel inequality. They have implications for human rights and human agency. Like generations before, we – governments, businesses and individuals – have a choice to make in how we harness and manage new technologies.

A DIGITAL FUTURE FOR ALL?

Digital technologies have advanced more rapidly than any innovation in our history – reaching around 50 per cent of the developing world’s population in only two decades and transforming societies. By enhancing connectivity, financial inclusion, access to trade and public services, technology can be a great equaliser.

In the health sector, for instance, AI-enabled frontier technologies are helping to save lives, diagnose diseases and extend life expectancy. In education, virtual learning environments and distance learning have opened up programmes to students who would otherwise be excluded. Public services are also becoming more accessible and accountable through blockchain-powered systems, and less bureaucratically burdensome as a result of AI assistance. Big data can also support more responsive and accurate policies and programmes.

However, those yet to be connected remain cut off from the benefits of this new era and risk falling further behind. Many of the people left behind are women, the elderly, persons with disabilities, members of ethnic or linguistic minorities, indigenous groups, and residents of poor or remote areas. The pace of connectivity is slowing, even reversing, among some constituencies. For example, globally, the proportion of women using the internet is 12 per cent lower than that of men. While this gap narrowed in most regions between 2013 and 2017, it widened in the least developed countries from 30 per cent to 33 per cent.

The use of algorithms can replicate and even amplify human and systemic bias when they operate on data that is not adequately diverse. A lack of diversity in the technology sector itself can mean that this challenge goes unaddressed. The toy model below illustrates the mechanism.
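
The mechanism is easy to demonstrate in miniature. In the sketch below, a classifier's single decision threshold is fit on training data in which one group is heavily over-represented; the under-represented group then suffers a visibly higher error rate. All numbers are synthetic and purely illustrative.

```python
# Synthetic illustration of how skewed training data yields skewed errors.
# All numbers are made up; the point is the mechanism, not the magnitudes.
import random
random.seed(0)

def sample(group, n):
    # Group A's positive signal is centered at 1.0, group B's at 0.6, so a
    # single learned threshold fits whichever group dominates training.
    center = 1.0 if group == "A" else 0.6
    return [(random.gauss(center, 0.3), 1) for _ in range(n)] + \
           [(random.gauss(0.0, 0.3), 0) for _ in range(n)]

train = sample("A", 950) + sample("B", 50)   # 95% of training data is group A

# "Training": pick the threshold that minimizes error on the training set.
best = min((sum((x > t) != bool(y) for x, y in train), t)
           for t in [i / 100 for i in range(-100, 200)])[1]

for g in "AB":
    test = sample(g, 1000)
    err = sum((x > best) != bool(y) for x, y in test) / len(test)
    print(f"group {g}: error rate {err:.1%}")   # group B fares notably worse
```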

THE FUTURE OF WORK

Throughout history, technological revolutions have changed the labour force: creating new forms and patterns of work, making others obsolete, and leading to wider societal changes. This current wave of change is likely to have profound impacts. For example, the International Labour Organization estimates that the shift to a greener economy could create 24 million new jobs globally by 2030 through the adoption of sustainable practices in the energy sector, the use of electric vehicles and increasing energy efficiency in existing and future buildings.

Meanwhile, reports by groups such as McKinsey suggest that 800 million people could lose their jobs to automation by 2030, while polls reveal that the majority of employees worry that they do not have the training or skills they need to get a well-paid job.

There is broad agreement that managing these trends will require changes in our approach to education, for instance by placing more emphasis on science, technology, engineering, and maths; by teaching soft skills and resilience; and by ensuring that people can re-skill and up-skill throughout their lifetimes. Unpaid work, such as childcare and elderly care in the home, will need to be better supported, especially since, as the age profile of global populations shifts, demand for such care is likely to increase.

THE FUTURE OF DATA

Today, digital technologies such as data pooling and AI are used to track and diagnose issues in agriculture, health, and the environment, or to perform daily tasks such as navigating traffic or paying a bill. They can be used to defend and exercise human rights – but they can also be used to violate them, for example, by monitoring our movements, purchases, conversations and behaviours. Governments and businesses increasingly have the tools to mine and exploit data for financial and other purposes.

With better regulation of personal data ownership, however, personal data could become an asset to the individuals who generate it. Depending on the protections put in place, data-powered technology has the potential to empower individuals, improve human welfare, and promote universal rights.

THE FUTURE OF SOCIAL MEDIA

Social media connects almost half of the entire global population. It enables people to make their voices heard and to talk to people across the world in real time. However, it can also reinforce prejudices and sow discord, by giving hate speech and misinformation a platform, or by amplifying echo chambers.

In this way, social media algorithms can fuel the fragmentation of societies around the world. And yet they also have the potential to do the opposite.

THE FUTURE OF CYBERSPACE

How to manage these developments is the subject of much discussion – nationally and internationally – at a time when geopolitical tensions are on the rise. The UN Secretary-General has warned of a ‘great fracture’ between world powers, each with their own internet and AI strategy, as well as dominant currency, trade and financial rules and contradictory geopolitical and military views. Such a divide could establish a digital Berlin Wall. Increasingly, digital cooperation between states – and a universal cyberspace that reflects global standards for peace and security, human rights and sustainable development – is seen as crucial to ensuring a united world. A ‘global commitment for digital cooperation’ is a key recommendation by the Secretary-General’s High-level Panel on Digital Cooperation .

FOR MORE INFORMATION

The Sustainable Development Goals

The Age of Digital Interdependence: Report of the UN Secretary-General’s High-level Panel on Digital Cooperation

ILO | Global Commission on the Future of Work

Secretary General’s Address to the 74th Session of the UN General Assembly

Secretary General’s Strategy on New Technology


A Journey Through the Evolution: History of Science and Technology


The history of science and technology is a fascinatingly complex web of discoveries, inventions, and innovations that have shaped the modern world.

From ancient times to the present day, humans have been on an unceasing quest for knowledge and ways to apply it in order to improve their lives.

This journey has taken us from rudimentary tools made of stone and bone to the complex technologies of today, encompassing everything from medical breakthroughs to space exploration.

Along the way, we’ve encountered numerous successes and failures that have helped us further refine our approach to science and technology.

Through it all, one thing remains true: science and technology play an integral part in our lives and will continue to do so in the future.

Ancient Times

The ancient civilizations of Egypt, Mesopotamia, China, India, and the Americas all had their own unique approaches to science and technology.

Examples of early inventions and discoveries from ancient times

There are numerous examples of early inventions and discoveries from ancient times that have had a lasting impact on our lives. These include:

  • The wheel – invented by the Sumerians around 3500 BC, it revolutionized transportation and paved the way for other forms of mechanization.
  • Metallurgy – an ancient practice that involved manipulating metals to create useful objects like tools and weapons.
  • Mathematics – an essential part of all scientific progress and discovery, mathematics has been practiced since ancient times.
  • Astronomy – the study of the heavens was a cornerstone of many early cultures, leading to advancements in navigation and time-keeping.
  • Medicine – early civilizations had their own forms of medical treatments, from herbal remedies to surgical procedures.
  • Agriculture – the practice of growing and harvesting crops was essential for sustaining civilizations.
  • Writing – the invention of writing systems allowed people to record knowledge and transmit it across generations.
  • Architecture – the practice of designing and constructing buildings has been around since ancient times, with some structures still standing today.

Impact these had on contemporary society

These inventions and discoveries have had a lasting impact on contemporary society.

The wheel, for example, has been integral to transportation since its invention and is still used in a variety of ways today.

Metallurgy enabled the development of metal tools and weapons that were stronger and more durable than those made from stone or bone.

Mathematics gave us the ability to calculate complex equations and understand the physical world around us.

Astronomy was key in developing calendars, maps, and navigation systems that could be used to explore the world.

Medicine allowed us to diagnose and treat diseases more effectively than before, while agriculture enabled reliable sources of sustenance.

Writing gave us a way to record knowledge and share it with others across continents and centuries.

Finally, architecture enabled us to build structures that could house and protect civilizations for centuries to come.

Modern Times

Modern science and technology have advanced exponentially in the last few centuries, leading to breakthroughs that have changed our lives in countless ways.

Examples of technological breakthroughs from the 19th century to the present day

There are numerous examples of technological breakthroughs from the 19th century to the present day. These include:

  • The telegraph – invented in 1844, it revolutionized communication by allowing people to send messages over long distances using electricity.
  • The telephone – Alexander Graham Bell’s invention of the telephone in 1876 transformed communication even further by allowing people to speak to one another from miles apart.
  • The lightbulb – Thomas Edison’s invention of the lightbulb in 1879 changed the way we light our homes and workplaces, leading to increased productivity and safety.
  • Vaccines – Edward Jenner’s 1796 invention of the smallpox vaccine kickstarted a revolution in medical science, with more vaccines being developed for other diseases over the years.
  • The automobile – Karl Benz’s 1886 invention of the automobile changed transportation forever and ushered in an era of personal mobility.
  • Radio – Guglielmo Marconi’s invention of the radio in 1895 revolutionized communication and entertainment.
  • The computer – John Mauchly and J. Presper Eckert’s invention of the first general-purpose electronic digital computer in 1946 opened new doors for data processing and computation.
  • Internet – Vinton Cerf and Bob Kahn’s invention of the internet protocol suite in 1974 enabled worldwide connection and communication.
  • Robotics – the development of robotics over the years has allowed us to automate processes, make factories more efficient, and explore space.
  • Artificial Intelligence – AI technology has become increasingly powerful in recent decades, allowing us to solve complex problems and develop innovative applications.

Impact these have had on contemporary society

These inventions and discoveries have played a major role in transforming contemporary society.

The telegraph, telephone, and radio enabled global communication for the first time.

The lightbulb revolutionized lighting, making it easier to do work in the dark and providing a safer environment at night.

Vaccines prevented people from succumbing to diseases that had previously been untreatable.

The automobile allowed people to travel greater distances than ever before and created an entirely new industry of automotive manufacturing.

The computer revolutionized data processing, allowing us to work faster and more efficiently than ever before.

The internet made it possible for people to connect from anywhere in the world, vastly increasing access to information and communication.

Robotics enabled us to automate processes, making factories and assembly lines more efficient.

AI technology has allowed us to develop applications that can learn and evolve, as well as solve complex problems.

Science and technology continue to shape modern life in countless ways.

Breakthroughs such as the telegraph, telephone, lightbulb, automobile, computer, internet, robotics, and artificial intelligence have revolutionized communication, transportation, lighting, data processing, automation, and problem-solving.

The use of science and technology will continue to be essential in advancing our lives.

As we move forward, new technologies such as renewable energy and biotechnology may have a profound impact on the way we live and work.

In addition, AI and robotics are likely to play an increasingly important role in many areas of our lives, from healthcare to transportation.

Ultimately, science and technology will play a key role in helping us solve the world’s most pressing challenges.


Modern (1940s-present)

Juliana-Marie Troyan; Maggie Elpers; Taylor Lorusso; Sevanna Boleman; Willis Watts; Joseph Rivera; David Jonah Lamothe; and Anthony Spearman

Introduction

In the study of science and technology in society, the modern world, spanning from the 1940s to the present day, is an overwhelming yet enriching period to study. Although the 1940s and the 2020s are both considered modern, the average person today would most likely find himself or herself living a very different life in the 1940s. Those differences span heavily debated subjects, such as societal views on medical care and defense, as well as everyday matters of entertainment and communication. As societal views have changed, science and technology have progressed to satisfy society’s needs and wants. The changes brought about by advances in medical care, defense, cybersecurity, entertainment, and communication define our lives today. The modern era has seen, and will continue to see, extensive changes in society driven by politics, religion, and pivotal events calling for significant developments in science and technology, which have given society the life it gladly accepts today. By the end of this chapter you will be able to understand the magnitude and presence of science and technology in the modern era. These ideas will be expanded upon in the following sections, starting with the question: What has science looked like in the modern world?

In the 21st century alone, scientists have detected gravitational waves, sequenced the genome of a cancer patient, and created human organs using stem cells (“10 Greatest Scientific Discoveries and Inventions of 21st Century | ISB Glasgow,” n.d.). However, perhaps one of the most influential advances in the scientific community was the ability to see particles at the atomic and molecular level. Thus the field of nanoscience was born, and ever since there has been an influx of scientific developments translated into technology that directly affects human life. The discovery of nanoscience has led to advances in the fields of computing and engineering, and it has the potential to change the gap in accessible healthcare technology between socio-economic classes.

Magnetic Nanoparticles for Clinical Diagnostics and Therapy

The word “nano” stems from a Greek root meaning dwarf, which proves apt for particles measured in billionths of a meter. One of the original scientists to use the term nanotechnology described the concept as having the goal of manipulating single atoms and molecules to produce macroscale products (Bardosova & Wagner, 2013). In the early 1980s, two scientists at IBM Research in Zurich developed the Scanning Tunneling Microscope (STM), which allowed materials to be imaged and manipulated at the atomic level (Baird & Shew, 2004). This let scientists and researchers see smaller structures than ever before, and since then a wide variety of fields have been affected. In the US specifically, the establishment of the National Nanotechnology Initiative (NNI), federally funded by institutions such as the National Science Foundation, the Department of Defense, and the National Institutes of Health, has created a push for innovation in physical, chemical, biological, and materials engineering (Roco, 2003). Nanoscience and nanotechnology together form a rapidly growing field at the intersection of engineering, physics, and computing.

Currently, some of the most significant applications of nanoscience are found in the biomedical and biological engineering fields, spanning disease therapies, vaccines, and even personalized medicine. Emerging as a subfield of biomedical engineering, the research area of drug delivery has readily adopted nanoscience. Using nanoparticles ranging in size from 10 to 1000 nanometers in diameter, researchers can deliver therapeutics such as pharmaceuticals, proteins, and even RNA encapsulated in nanoparticles to treat many diseases, including cancer and cardiovascular disease. The term nanoparticle is admittedly broad, as nanoparticles can be synthesized from polymers, peptides, and lipids, as well as other synthetic and biological materials. Nanoparticles are advantageous as drug delivery vehicles because they are readily taken up by cells, they provide a steady release of drugs, and targeting moieties can be incorporated to help them deliver their cargo to a specific site in the body (Sahoo & Labhasetwar, 2003). Nanoparticles can be conjugated with cell-specific ligands that carry the nanoparticle to wherever the matching receptor is overexpressed. For example, a nanoparticle could be tagged with a motif that binds to cancer cells overexpressing a particular receptor on the cell membrane, sparing healthy tissue and avoiding the common side effects of chemotherapeutic drugs.

One of the newer advances at the intersection of nanoscience and healthcare is personalized medicine. Usually, when a patient is diagnosed with a disease, there is one standard treatment, and every patient diagnosed with that disease receives it. With the genetic testing now available, however, scientists can predict which drugs will be more beneficial for individual patients and tailor effective therapies to smaller populations with different genetic profiles (Vogenberg et al., 2010), as sketched below. Another avenue of personalized medicine comes from stem cells. Stem cells are characterized by their ability to grow into multiple types of cells, and in the early 2000s it was discovered that basic cell types could be reprogrammed into induced pluripotent stem cells (iPSCs) capable of forming functional tissue-specific cells (Chun et al., 2011). For example, a patient’s stem cells could be collected, reprogrammed in a laboratory to grow into a different cell type, and implanted back into the body to treat a disease or injury. This method is advantageous because it limits the adverse side effects that come from introducing foreign materials into the human body.
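
To make the matching step concrete, the sketch below shows the kind of genotype-to-therapy lookup that personalized medicine relies on. The metabolizer statuses and drug recommendations here are illustrative placeholders, not clinical guidance.

```python
# Toy pharmacogenomics lookup: match a patient's genotyped variants against
# dosing guidelines. (All statuses and recommendations are hypothetical
# placeholders for illustration, not medical advice.)
GUIDELINES = {
    ("CYP2D6", "poor_metabolizer"):   "avoid drug X; consider alternative Y",
    ("CYP2D6", "normal_metabolizer"): "standard dose of drug X",
}

def recommend(patient_variants: dict) -> list[str]:
    """Return every guideline whose (gene, status) matches the patient."""
    return [advice for (gene, status), advice in GUIDELINES.items()
            if patient_variants.get(gene) == status]

print(recommend({"CYP2D6": "poor_metabolizer"}))
# -> ['avoid drug X; consider alternative Y']
```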

It seems as if nanoscience and nanotechnology are the future of modern medicine, but this begs the question: can they help everyone? The development of personalized medicine could widen an already significant gap in access to health care between socio-economic classes. Take, for example, a developing country that lacks the infrastructure or essential utilities to support modern laboratories, or whose patients lack the income to pay for personalized therapies that are unlikely to be cost-effective. Do these patients have the same access to therapies as patients living in first-world countries with research institutes that receive billions of dollars in funding each year? Some scientists argue that personalized medicine can help eliminate health disparities, for instance by developing targeted therapies for ethnic groups that share common disease characteristics (Brothers & Rothstein, 2015). Even so, it would be naïve to claim that patients in developed countries will not benefit more from personalized medicine than those in developing countries.

Nanoscience is the future of many disciplines, with the ability and potential to affect human life on a large scale. The globe is currently experiencing a boom in the use of nanoscience, which has the potential to cure previously incurable diseases and bring better healthcare to developing countries. Whether these technologies will be accessible to all is an issue that will undoubtedly be shaped by forces outside the scientific community, such as legislation and national regulation, as more and more technologies arise from nanoscience.

In addition to nanotechnology, 3D printing is another modern technology that is continually evolving to encompass a wide range of applications, especially in the medical industry, giving medical professionals, researchers, and educators alike the opportunity to improve and advance procedures and technology like never before. 3D printing, also known as additive manufacturing, takes a digital model or blueprint of an object of the user’s creation and prints successive layers of material to create a tangible 3D model of that object (Nawrat, 2018). Where other manufacturing methods subtract material to form a product, additive manufacturing adds it, as sketched below. The initial goal of 3D printing was faster prototyping, but it has developed into much more in today’s world. Medical researchers and specialists are using 3D printing to create artificial organs with bio-printers to validate proper drug dosages and practice complicated surgical operations in a cost-effective manner (Nawrat, 2018). Universities and some primary schools are implementing 3D printers as a resource for school projects, coursework, prototyping, and learning. Additionally, 3D printing is being used to manufacture custom products, such as prostheses, at relatively low cost. Traditional methods for the same result, such as injection molding, take weeks and cost thousands of dollars. 3D printing allows companies to build their objects remotely and on a case-by-case basis.
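
As a minimal illustration of the additive idea, the sketch below performs the layer arithmetic a slicer starts from: dividing a part's height into the successive layers a printer will deposit. A real slicer also computes a 2D toolpath for every layer; the dimensions here are arbitrary examples.

```python
# Layer arithmetic behind additive manufacturing: a part is built bottom-up
# as a stack of thin layers. (A real slicer also generates the 2D toolpath
# for each layer; the dimensions below are arbitrary examples.)
def slice_layers(part_height_mm: float, layer_height_mm: float = 0.2):
    n = round(part_height_mm / layer_height_mm)  # number of deposited layers
    return [round(i * layer_height_mm, 3) for i in range(1, n + 1)]

layers = slice_layers(10.0)  # a 10 mm tall part at 0.2 mm per layer
print(len(layers), "layers; first nozzle heights:", layers[:3])
# -> 50 layers; first nozzle heights: [0.2, 0.4, 0.6]
```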

A 3D printer at work: Anycubic I3 Mega 3D printer

One of the most impressive technological advancements in recent years regarding 3D printing in the medical industry is bioprinting. Rather than printing plastics or metals, bioprinters use a computer-guided pipette to layer living cells and create artificial living tissues (Nawrat, 2018). Some of the most common materials used in this type of 3D printing include cell aggregates, such as tissue spheroids, cell pellets, and tissue strands, as well as hydrogels, micro-carriers, and decellularized matrix components (Peng, 2017). Bioprinting has expanded to encompass a wide range of applications in medicine, such as pharmaceutical drug discovery, the creation of artificial organs called “organoids” for elaborate surgery preparation or organ replacement, and giving medical students more real-world experience without added patient risk. One of the biggest challenges in new drug discovery is the number of strict regulatory and validation requirements that must be met to ensure a drug’s safety and efficacy for public use. As can be expected, this is a tedious and taxing process that, unfortunately, leads to inflated attrition rates and significant losses in funding (Peng, 2017). Traditional cell-based studies use 2D monolayer culture methods, but these are unrealistic for drugs that will act in a 3D environment. Bioprinting allows for more predictive methods of efficacy and safety analysis, decreasing attrition rates, enabling a “quick-win, fast-fail” mentality, saving time and money, and increasing the pace of drug discovery.

In addition to advancing medical experiments and procedures, 3D printing can also be used to develop more precise surgical instruments and tools for other medical practices in a cost-effective manner. In environments such as hospitals and medical centers, it is very easy for bacteria and infection to spread. Tools made using 3D printing, however, can be “one-off” or even made for specific tasks on a case-by-case basis, improving procedures and eliminating the risk of transferring bacteria or viruses. It can be argued that purchasing or “printing” new tools for each surgery would add up in cost. However, 3D printing can create the perfect tools for the job for far less than other manufacturing methods would cost, and those savings, together with the customization benefits, outweigh the additional material expenses (3D-printed Surgical Tools, n.d.). As this technology continues to grow in popularity, economists estimate that the 3D printing industry will be worth $3.5 billion by 2025 in the medical field alone (Nawrat, 2018).

3D printing is further expanding its reach in the medical industry by solving many issues and restrictions that arise in this field. Prosthetics are among the most widely used yet most expensive medical devices for amputees today, and there are many advantages to incorporating 3D printing into their development. It can be difficult and expensive to produce prosthetics that fit a patient perfectly, and every patient’s needs are different. With 3D printing, prosthetics can be modeled and printed at a much lower cost to ensure a proper fit for every patient (Top 5 Applications, 2019). Children who are candidates for prosthetics typically cannot get a quality prosthetic until they are fully grown, but 3D printing allows new prosthetics to be printed every few months to keep up with their growth without an extreme financial burden. Developing countries that may not otherwise have prosthetics as an option can also take advantage of 3D-printed ones (Top 5 Applications, 2019). The bioprinted “organoids” described above offer further applications: they can be used in pharmaceutical testing as a cost-effective and ethical means of identifying drug side effects and validating safe dosages (Top 5 Applications, 2018), and surgeons can practice on patient-specific organ replicas before performing the actual operation, a method proven to speed up procedures and minimize trauma (Nawrat, 2018).

The events of the 2016 presidential election revealed the power that social media technology possesses in society through its influence on public opinion, exercised via algorithms that shape what a social media user sees. These algorithms can give rise to news-feed echo chambers, dark posts, and bots. Social media has only recently become politically relevant, having emerged in the early 2000s. Social media sites use news-feed algorithms to order the posts that appear on a user’s feed (Barnhart, 2019). Social media has woven itself into many parts of our lives, sometimes in unexpected ways, and one crucial part of our society it has changed forever is politics. A particularly profound demonstration of the relationship between politics and social media took place during the 2016 presidential election.

Social Networks

One potential effect of the use of news-feed algorithms by social media sites is the creation of echo chambers. An echo chamber describes a situation in which the information and opinions an individual is exposed to predominantly align with their own beliefs (Digital Media Literacy, 2019). This can occur wherever information flows, but social media algorithms have enabled a specific type of echo chamber called a filter bubble. As stated previously, social media algorithms determine how posts appear on a user’s feed, and they often take into account what types of people and pages an individual interacts with (Digital Media Literacy, 2019). For example, if a user tends to click on pages about dogs, then posts about dogs will appear at the top of their news feed more often. Users are therefore repeatedly shown content much like what they already engage with; a toy version of such a ranker appears below. In terms of politics, this can prevent users from encountering viewpoints different from their own and can further polarize views. There has been a trend in American politics in which those on opposite poles of the political spectrum feel increasingly negative toward each other (Allcott & Gentzkow, 2017). While there is no conclusive evidence that echo chambers affected the outcome of the 2016 presidential election, studies suggest that news-feed algorithms may exacerbate political tensions between those on opposite ends of the political spectrum.
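
A toy version of engagement-based ranking makes the filter-bubble dynamic visible: the more a user has clicked a topic, the higher that topic is placed, so the feed keeps reinforcing itself. This is an illustrative simplification, not any platform's actual algorithm, and all the data is invented.

```python
# Toy news-feed ranker: posts are scored by the user's past engagement with
# each topic, so the feed drifts toward what the user already clicks on.
# (Purely illustrative; real platform algorithms are far more complex.)
user_clicks = {"dogs": 40, "politics_left": 25, "politics_right": 2}

posts = [("cute dog video", "dogs"),
         ("left-leaning op-ed", "politics_left"),
         ("right-leaning op-ed", "politics_right")]

def rank(posts, clicks):
    # Higher past engagement with a topic -> higher placement in the feed.
    return sorted(posts, key=lambda p: clicks.get(p[1], 0), reverse=True)

for title, topic in rank(posts, user_clicks):
    print(f"{user_clicks.get(topic, 0):>3} past clicks -> {title}")
```

Run repeatedly, a loop like this tends to push the rarely clicked viewpoint to the bottom of the feed, which is exactly the polarizing dynamic described above.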

Echo chambers are not the only concern at the intersection of social media and politics; “dark posts” should also be considered. Dark posts are advertisements that are visible only to specific users on social media. These advertisements are part of the ad algorithms of several social media sites. For example, a dark post aimed at college students would be visible only to college students. Dark posts go a step beyond ordinary targeted advertising: they can use keywords as specific as your actual job title (Gollin, 2018). In short, dark posts allow fine-grained customization of a targeted advertising strategy; the sketch below shows the filtering idea in miniature. In the 2016 election, it was found that a Russian organization bought thousands of ads on several social media platforms, such as Facebook, focusing on political and social issues and actively targeting certain demographic groups (Young et al., 2018). Facebook testified before the United States Senate that 126 million Americans were exposed to advertisements and posts, including some dark posts, created by Russians (Gollin, 2018). Social media platforms are changing their dark post rules to foster more transparency and prevent such an episode from recurring.
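
In miniature, the targeting logic behind a dark post is just a filter over user attributes: only profiles matching every criterion ever see the ad, and it is invisible to everyone else. The field names and values below are invented for illustration.

```python
# Toy "dark post" audience filter: the ad is delivered only to users who
# match every targeting criterion, and no one else ever sees it.
# (Field names and criteria are hypothetical examples.)
ad_targeting = {"occupation": "college student", "state": "FL"}

users = [
    {"name": "Ana", "occupation": "college student", "state": "FL"},
    {"name": "Ben", "occupation": "engineer",        "state": "FL"},
]

audience = [u for u in users
            if all(u.get(k) == v for k, v in ad_targeting.items())]
print([u["name"] for u in audience])   # ['Ana'] -- Ben never sees the ad
```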

Another issue surrounding social media use in the 2016 presidential election is the use of bots. Bots are automated social media accounts that use algorithms to interact with other users (Tarantola, 2019). It can often be challenging to differentiate between a bot and a human; only about 47% of Americans say they could recognize a bot account (Tarantola, 2019). Bots can be used for political strategies as well. These so-called political bots can disseminate news, post spam, and harass other users. Bots were particularly rampant in the 2016 presidential election, with higher levels of bot use than ever before (Kollyani et al., 2016), and they accounted for a large amount of the political content generated and discussed during the election. Since bots are often used to spread news, they can spread misinformation as well. As this problem has grown, social media platforms are taking measures to reduce the power and prevalence of bots; one simple starting point is sketched below.
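
Platform countermeasures often begin with simple behavioral heuristics like the sketch below, which flags accounts that post at inhuman rates or recycle near-identical messages. The thresholds are invented for illustration; real detection systems combine many more signals, such as network structure, timing patterns, and account age.

```python
# Simple bot heuristic: flag accounts that post at inhuman rates or that
# copy-paste the same message. Thresholds are invented for illustration;
# real systems combine many more signals (timing, network structure, etc.).
def looks_like_bot(posts_per_day: float, distinct_ratio: float) -> bool:
    # distinct_ratio = unique messages / total messages (low = repetitive spam)
    return posts_per_day > 100 or distinct_ratio < 0.2

print(looks_like_bot(posts_per_day=400, distinct_ratio=0.9))   # True: volume
print(looks_like_bot(posts_per_day=12,  distinct_ratio=0.95))  # False
```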

Echo chambers, dark posts, and bots all played a part in the events of the 2016 presidential election. Together they revealed the power that social media technology holds over public opinion through its news-feed algorithms. Social media has been beneficial in widening access to the information with which people form their political identities. Unfortunately, the same characteristics that produce those benefits also provide an opening for disinformation to spread. In an age when social media algorithms value engagement over credibility, it is essential to be aware of the ways public opinion can be manipulated.

Many factors drive the advancement of science and technology in society today. Resources, knowledge, prosperity, and ambition all influence decisions to create, investigate, and seek answers and solutions to the problems in the world. One controversial technology has emerged in the field of medicine: the scientific breakthrough of new reproductive technology (NRT), used to treat infertility, is spreading rapidly around the world. Many religions hold varying views of NRT. Christianity, whose main North American branches are Catholicism and Protestantism, is the most prominent religion in North America, and it has affected new reproductive technology in three different ways: supporting the means to overcome infertility, encouraging more research in the field, and discouraging its future use.

In vitro fertilization

Many Christians believe that NRT is a means to overcome infertility, giving couples who cannot conceive naturally the opportunity to do so through this medical advancement. According to the Centers for Disease Control and Prevention, infertility is defined as “not being able to get pregnant (conceive) after one year (or longer) of unprotected sex” (Becker, 2019). From 2015 to 2017, nineteen percent of women aged 15-49 struggled with infertility, so many turned to new reproductive technology, which includes in vitro fertilization (IVF) and intracytoplasmic sperm injection (ICSI). In IVF, fertilization occurs outside the female’s body; in ICSI, the doctor injects sperm into the female’s mature egg. In some cases, donor eggs and sperm are used. Protestants, in particular, feel that babies should be celebrated as life is celebrated in the Bible, the sacred text of Christians. Others, like Catrece Caron, believe that “God created these doctors to do this kind of work.” Caron, according to the Washington Post, was a forty-one-year-old mother who had a seven-year-old and a two-year-old through IVF in Boston, where over 90,000 babies have been born as a result of these medical advancements. Another mother, Lesley Brown, said her daughter, “Louise is truly a gift from God” (Smith, 2018). Boston is just one American city where babies conceived through NRT have been celebrated as gifts from God.

Protestants have encouraged more research in the medical field concerning infertility and NRT. For example, George Church, a Harvard geneticist, helped launch the Human Genome Project, which mapped the DNA in humans. He believes Protestants are beginning to encourage this new research, just as they came to support IVF. Church says, “In the Bible, it says we are given dominion over the Earth. Inventing newer and newer advanced technologies is almost a key component of human nature” (Cha, 2018). Scientists feel they are gaining more support from Protestants and, in turn, are encouraged to research further into the medical science and technology of infertility.

While many Protestants support and encourage this science and technology, the Catholic Church discourages its future use. The Church holds that NRT is morally illicit and strongly disapproves of both research into it and its use. A report by the Catholic Church entitled Respect for Human Life and Dignity of Procreation states, “Children are a gift and a blessing from God and that although science makes some things possible, it does not make them right. Research must continue in the causes of infertility, but the morality of these should be carefully considered” (Sallam, 2016). Catholics make up a significant share of the world’s population, so their beliefs carry considerable influence, and the Catholic Church discourages future use of, and research into, these medical advancements against infertility.

Science and technology are often shaped by religion. The 2016 article “Religious Aspects of Assisted Reproduction” notes, “Human response to new developments regarding birth…is largely shaped by religious beliefs. When assisted reproduction was introduced into medical practice in the last quarter of the twentieth century, it was fiercely attacked by some religious groups and highly welcomed by others” (Sallam, 2016). Christianity, which is widely practiced in North America, has influenced the medical advancements of new reproductive technology. While some believers support its use and encourage more research, others actively discourage it. Either way, religion has influenced NRT and, through it, today’s society as a whole.

Key Events & Innovations

The modern era saw the rise of technologies such as reproductive technology, the television, man-made satellites, personal computers, and many more. However, one specific chain of events triggered significant technological advancements in medicine and society. With the end of the Second World War and the rise of nuclear weapons, tensions between the Soviet Union and the United States grew into the Cold War, which led to an arms race and the growth of nuclear arsenals on both sides; those tensions also spread into space. When the Soviets launched the world’s first artificial satellite, the United States felt the pressure building, and the Space Race between the two major powers began. During the Space Race, the United States not only landed on the Moon but also developed groundbreaking technological advancements.

Launch of Apollo 11

A quick recap: Germany’s instability after the First World War led to the rise of Adolf Hitler and his Nazi Party. Hitler anointed himself supreme leader of Germany in 1934. He and his National Socialist Party broke the Versailles Treaty, the peace treaty that ended World War I, by rearming Germany and its military. Later, Hitler and Joseph Stalin, then dictator of the Soviet Union, signed a pact, and Germany and the Soviet Union invaded Poland from the west and east. Great Britain and France, which had promised military support to Poland if it were ever invaded, declared war on Germany, igniting the start of WWII. In 1940, Germany, Italy, and Japan signed the Tripartite Pact, forming the Axis alliance. During the Holocaust, the mass genocide of European Jews from 1941 to 1945, more than 6 million Jews were murdered, many in German-occupied Poland. On December 7, 1941, Japanese aircraft attacked a major United States naval base at Pearl Harbor in Hawaii, which led to the United States entering the Second World War against the Axis alliance. On June 6, 1944, known as “D-Day,” the Allies began the invasion of Europe by landing troops on the beaches of Normandy, France, marking the beginning of Germany’s defeat. To end the war with the remaining Axis power, Japan, the United States developed nuclear weapons, the atomic bombs later dropped on Hiroshima and Nagasaki. The war promptly ended as the Japanese agreed to the terms of the Potsdam Declaration, a statement that called for the surrender of the Japanese armed forces (Gilbert, 2014).

Taking a step back to before the Second World War was over: the Allied powers held a conference to decide how to carry on after the war. During this conference, tensions grew between President Harry S. Truman and Joseph Stalin, who were suspicious of each other’s intentions, especially since Truman had made Stalin aware that the United States had created nuclear weapons through its discreet program, the Manhattan Project. These rising tensions helped give birth to the period known as the Cold War. Over the next few years after World War II ended, the Soviet Union began experimenting with nuclear weapons, and by the 1950s the two superpowers had grown their nuclear arsenals to the point where they could destroy each other (Oreskes & Krige, 2014; McDougall, 2008). With the stress of nuclear weapons and the spread of communism breathing down America’s neck, the United States started to feel the pressure. The competition between the Americans and the Soviets did not stop on land, air, and sea; it extended to the final frontier: space. The two superpowers explored beyond our world to see how it could benefit their cause, and on October 4, 1957, the Soviet Union launched the first man-made satellite, Sputnik I, into Earth’s orbit. The United States saw this achievement as an immediate threat, since the ballistic missile that launched Sputnik, the Soviet R-7, could potentially deliver a nuclear warhead to American soil. In 1958, the United States created the National Aeronautics and Space Administration, better known as NASA, and the Space Race began. In 1959, the Soviets launched a space probe that crashed into the Moon, and in 1961 they sent the first man, Yuri Gagarin, into space, where he orbited the Earth. In response to the Soviets’ achievements, President John F. Kennedy pledged that America would land on the Moon before the Russians. After several Apollo missions, the Apollo 11 mission, the first lunar landing attempt, began on July 16, 1969, and on July 20, 1969, U.S. astronaut Neil Armstrong became the first man to step on the Moon. The Space Race ended with the Americans first to land on the Moon (Getchell, n.d.). Landing on the Moon gave Americans hope for a prosperous future; as a New York Times reporter wrote in January 1960, “I can picture a flourishing civilization on the moon twenty or thirty years (after landing on the moon)” (Levitt, 1960).

Sputnik-1

The Space Race led to significant technological achievements; some inventions are still used today, while others were modified over the years to create the technology we encounter every day. NASA created scratch-resistant astronaut helmets, and in 1983 it licensed the scratch-resistant technology to the Foster-Grant Corporation. Foster-Grant combined its ten years of research with NASA’s technology to create a scratch-resistant plastic that would outlast glass under normal wear; most spectacle lenses we use today are made of plastic instead of glass (Bryan, 2016). Modern firefighting equipment is also derived from spacesuit materials and equipment: the materials used in spacesuits make for good flame retardance and heat resistance, and firefighters’ breathing systems are modeled after astronaut life-support systems. This technology not only helps save the lives of fire victims but also helps keep the hero firefighters themselves alive. Another technology used to reduce or suppress accidental fires is the adjustable smoke detector, created by modifying the original smoke detector; NASA and the Honeywell Corporation worked together to develop a detector whose sensitivity could be adjusted to prevent false alarms. And although it is commonly thought that duct tape was created by NASA for its space operations during the Cold War, duct tape was actually invented in the 1940s during World War II, where it functioned as medical tape.

Additionally, CAT scans and MRIs, minimally invasive medical scans used to examine a person’s tissues, bones, and organs, use digital signal technology like NASA’s; during the Apollo missions, NASA used this technology to recreate images of the Moon (“20 Inventions We Wouldn’t Have Without Space Travel,” n.d.). A sketch of the underlying idea follows. NASA also used SPOC, a navigation monitoring computer. This computer was a modification of a commercial computer called the GRiD Compass, chosen for space missions because of its compact size, large storage capacity, and high processing speed. A significant modification to allow for portability was the addition of a fan to cool the computer, and this change helped propel the portable computer market (Haggerty, 1985). As we have seen, the Space Race led NASA to collaborate with others in creating technology that we can appreciate in our daily lives. As a New York Times reporter wrote in April 1985, “the development of military hardware has often enriched science and technology, and the trend is certain to continue” (Browne, 1985).
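
To give a flavor of what “digital signal technology” means in practice, here is a minimal sketch of smoothing a noisy digitized signal with a moving average. Real CT and MRI reconstruction relies on far more sophisticated mathematics (Fourier-based methods, for instance); this only illustrates the basic idea of cleaning up sampled data.

```python
# Minimal digital-signal-processing sketch: smooth noisy samples with a
# moving average. (Illustrative only; real CT/MRI reconstruction uses far
# more sophisticated math, e.g., Fourier transforms.)
def moving_average(signal, window=3):
    half = window // 2
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - half): i + half + 1]  # clipped at the edges
        out.append(sum(chunk) / len(chunk))
    return out

noisy = [1.0, 1.2, 0.8, 5.0, 1.1, 0.9, 1.0]   # a noise spike at index 3
print([round(x, 2) for x in moving_average(noisy)])  # the spike is damped
```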

These technologies were developed under the urgency of the Space Race and the tensions of the Cold War. After Sputnik I orbited Earth for the first time, the United States created NASA to compete in the Space Race, which led the country to land on the Moon and achieve other goals and innovations. One technological innovation with a significant impact on society is the satellite: satellites enable our modern world to connect almost instantly through broadband internet, creating a platform that allows people to collaborate and learn faster and more conveniently than ever before. Today we use portable computers to surf the internet, which would not be possible without the modifications made to the SPOC navigation monitoring computer used on NASA space missions. These innovations in portable computing, among many others, have driven society through the modern era.

The word “computer” has been in use since the early 1600s, when it described a person, rather than a machine, who performed computations. This definition held until the 19th century, when the industrial revolution saw the invention of machines whose primary purpose was performing calculations. Since then, computers have come a long way: they allow us to access vast amounts of information, keep us in contact with people across the globe, and have even helped send people to the Moon. The computer is one of the most significant technological inventions of modern times. Its creation has changed the way society operates in professional and even personal settings, and computers have expanded the overall knowledge of the human race through their computational power and the ease with which they share information.

IBM PC 5150

Charles Babbage conceptualized the first computer in 1822. Called the Difference Engine, it was designed to compute several sets of numbers and print out the results; because of a lack of funding, however, a full-scale version was never completed. The first fully functional, programmable modern computer did not come until 1938, when the German engineer Konrad Zuse built the Z1 in his parents’ living room; it is considered the first electromechanical binary programmable computer. It took eight more years to complete what we consider the first “modern” computer. The ENIAC, invented by J. Presper Eckert and John Mauchly at the University of Pennsylvania, weighed about 30 tons and is regarded as the first example of a fully functional digital computer. Since then, computers have come a long way. Modern versions are exponentially smaller and more powerful: most people own a computer light enough to fit in a backpack yet with millions of times the processing power of the ENIAC. Computers today continue to get smaller and lighter while increasing processing power at a breakneck pace. Where the ENIAC could not even store its own programming commands, computers today often have dozens of gigabytes of memory, and some have terabytes of storage. Initially, many people were uncomfortable around computers. The term “computerphobia” was used frequently in the ’80s to describe this anxiety, with many publications even offering tips for how to treat it. The term retained its popularity until the ’90s, when people’s technological fears turned from computers to the internet itself.

The development of the internet is arguably the most significant advancement in the history of computers. Computing power and storage used to be confined to the single machine they resided on, which limited their usefulness. During the 1970s and 1980s, small networks started to appear, each limited to a single university computer science department or business. It was not until 1990 that the World Wide Web, the first recognizable form of the internet as we know it, was invented by computer scientist Tim Berners-Lee. The falling cost of disk space meant that system administrators could set aside vast amounts of storage to host data that could be shared globally over the internet. This allowed vast amounts of knowledge to spread throughout the world, and software development accelerated as collaboration became significantly easier. Nowadays, the internet is a part of all of our lives: we use it to keep in touch with friends across the globe as well as for everyday entertainment.

Computers have entirely changed the way our society works. One significant result of their creation and popularity has been the shift of some economies from manufacturing to service jobs. Completely new job categories have been created to service and implement computer technologies, and the networking ability of computers has allowed businesses to relocate to more remote locations than before. Information processing tasks like payroll and record management, which used to require hours of work by a person or group of people, can now easily be automated, as the sketch below illustrates. In weather forecasting, our current understanding of weather depends almost entirely on computational models, and biological research now often starts with a predictive model that helps determine what to explore in the real world. The computational power of computers has completely changed the way we approach tasks in society.
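
As a concrete illustration of the kind of clerical task computers now automate, here is a minimal Python sketch of a weekly payroll run; the employee names, pay rates, and overtime rule are invented for the example.

```python
# Hypothetical weekly payroll run: hours over 40 earn 1.5x pay.
# All names and figures are invented for illustration.
employees = [
    {"name": "A. Rivera", "rate": 22.50, "hours": 44},
    {"name": "B. Chen",   "rate": 19.00, "hours": 38},
]

def weekly_pay(rate, hours, overtime_after=40, overtime_mult=1.5):
    """Compute gross pay with time-and-a-half for overtime hours."""
    regular = min(hours, overtime_after) * rate
    overtime = max(hours - overtime_after, 0) * rate * overtime_mult
    return regular + overtime

for emp in employees:
    print(f'{emp["name"]}: ${weekly_pay(emp["rate"], emp["hours"]):.2f}')
# A. Rivera: $1035.00
# B. Chen: $722.00
```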

Due to computers, humans have accomplished things our predecessors would never have dreamed of. Their power and versatility have allowed us to map some of the deepest parts of the ocean while also helping take us to the Moon itself. Then the invention of the internet unlocked almost infinite possibilities for their use. Distance has become almost meaningless when we can communicate nearly instantly, and vast amounts of knowledge that used to be confined to one location can be accessed by people all over the globe simultaneously. Without computers, we would not be anywhere near as advanced a society as we are today.

Human beings have always been naturally curious about the world that surrounds them. As homo sapiens evolved and their intelligence increased, they broke out of their original habitats to explore the surrounding world. After years of evolution and exploration, homo sapiens covered the world, establishing societies on every corner of the planet. As time continued, these societies advanced, and homo sapiens continued to explore the untouched peaks and valleys of the earth. Having explored the world, these intelligent explorers soon turned their eyes to the hidden heavens of the sky. In the early 1600s, the first telescope was invented to look at the heavens, and human society has been entranced with the idea of space ever since. The theoretical key to space travel came in 1903, when the Russian scientist Konstantin Tsiolkovsky derived his famous rocket equation, which calculates the fuel-to-weight ratio needed to propel a rocket out of the earth’s atmosphere and into space (see the equation below). As time went on, humans integrated this result into ever more advanced technologies, and those advances fueled the creation of many modern technologies used in everyday life today. Along with the advances in modern society’s technology, space exploration itself has advanced in many ways. It has evolved from a government-owned enterprise that could send small satellites into orbit to one able to build an International Space Station in which humans can live for extended periods, and private companies like SpaceX and Boeing are now joining the coalition of space exploration. Society’s ability to explore the depths of space has led to breakthroughs in modern technology and scientific study that, though many people do not realize it, have made possible the lives we live today.
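
For reference, the rocket equation Tsiolkovsky derived can be written as

\[
\Delta v = v_e \ln\frac{m_0}{m_f}
\]

where \(\Delta v\) is the change in velocity the rocket can achieve, \(v_e\) is the effective exhaust velocity, \(m_0\) is the initial (propellant-laden) mass, and \(m_f\) is the final (dry) mass. Because the mass ratio sits inside a logarithm, each additional unit of \(\Delta v\) requires multiplying the propellant load rather than adding to it, which is why the fuel-to-weight ratio matters so much.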

The study and exploration of space have had a drastic effect on modern life down on earth. Many technologies used in everyday western life would not even be possible without space travel, because space exploration demanded advancements that allow humans to travel further and stay in space longer. These advancements touch areas of modern life such as health and medicine, transportation, public safety, consumer goods, and environmental resources. In the field of health and medicine, space exploration has helped humans understand the effects of zero gravity on the human body; in turn, this understanding has helped develop better health practices back under the weight of gravity, allowing humans to be healthier than ever before.

Along with helping us understand the human body better, NASA funded many studies on artificial limbs and muscles that have, in recent years, begun to translate into everyday medicine, allowing people who have had a limb amputated to be fitted with advanced robotic limbs. Another impactful set of technologies NASA created to help study deep space was the digital image-processing techniques later incorporated into medical scanners such as the MRI machine, which are now used every day to save lives. In the field of transportation, NASA funded a study by Goodyear to develop a rubber strong enough to help land a rover on Mars. This rubber was later released to the public to produce more durable car tires, reducing tire blowouts and making daily travel safer.

Along with helping create safer tires, space exploration has also driven improvements in public safety, notably in video enhancement and analysis programs. NASA developed computer programs to produce better quality video and to analyze footage frame by frame in order to study deep space; the public now uses this technology to help law enforcement produce quality video footage of crimes, making for a safer public. For the modern consumer, NASA helped fund many products used in everyday life. One of these, patented in 2000 and released to the public in 2005, is the Bowflex workout machine. NASA funded this project to reduce the muscle atrophy and bone-density loss of astronauts who spend long periods in space, by allowing them to do resistance-style workouts in zero gravity. Five years after its creation, NASA released the designs to the public, and the item became a huge hit. Finally, space exploration helped develop some of the most important technologies now being rapidly adopted around the world: environmental resources technologies. These include solar panel cells, water purification systems, and pollution control technologies. Even without knowing it, many of us use technology that was created for space exploration, and, as time continues, the exploration of space will continue to drive our civilization into a more technologically advanced society.

SpaceX Facility and Test Center

As space exploration becomes more widespread and shared, the idea that only government entities can undertake it is fading. Private companies like SpaceX and Boeing are breaking into the market of building rockets and opening space travel to the public. The CEO of SpaceX, Elon Musk, dreams of public space travel along with the colonization of Mars and, perhaps in the distant future, other planets or solar systems. With these dreams of traveling further and colonizing other planets, science and technology are bound to see huge advancements, which may include faster forms of travel or the ability to terraform a planet so that it can sustain life. With deeper travel into and study of space, science is also bound to advance. These advances could come from finally understanding the vast quantities of material humans have been able to measure but not yet explain, or even from finding life somewhere else in the universe. Only the future holds these advancements, and space exploration is the driving force.

STS in the Modern World

The study of science and technology in society underscores the idea that society and its needs drive the progression of technology (Feenberg, 2012). Furthermore, society chooses the technologies it will accept, allowing them to succeed, and the technologies it will reject, causing their failure and often their obsolescence. The innovations in space exploration detailed above developed because of the societal need for advanced defense, communication, and research. Society judged the resulting technology to solve its problems while offering little to no disadvantage, so it accepted space exploration and its technology. Societal needs, however, change with time. One highly anticipated technology designed to satisfy the needs of modern society was Google Glass, depicted in the figure below. Although the technology itself was sound and useful, it was discontinued only one year after it was released due to society’s apprehension about its possible applications (Donnell, 2018). The following deep dive into the failure of Google Glass emphasizes society’s influence on innovation in the modern era of technology. Specifically, in the modern era, privacy concerns dictate which new technologies society will accept, as the failure of Google Glass makes evident.

Google Glass with frame

As mentioned above, although it eventually failed, Google Glass Edition 1 was created to solve societal issues of the modern era. The goal of these glasses was to act as a hands-free smartphone, allowing the user to access the internet, camera, maps, calendar, apps, and other smartphone features solely through voice and motion commands (Pogue, 2013). These functions offered not only convenience but also answers to modern safety concerns, such as hands-free driving; the product could also keep pedestrians safe by preventing phone use while walking near busy streets or on crowded sidewalks. Although society and its needs drove the invention of Google Glass Edition 1, as they do most other technologies, this product failed to take into account another societal need that would eventually lead to its downfall: privacy.

Modern ideals and morals of society, specifically privacy concerns, led to the downfall of Google Glass. Invasion of privacy in the modern world is defined as “the unauthorized collection, disclosure, or other use of personal information as a direct result of electronic commerce transactions” (Wang et al., 1998). Market concerns with Google Glass, however, regarded not only personal information but also the information of bystanders. Since the technology mounted a camera on a person’s head, it recorded not only the wearer’s voice but also their surroundings. This capability meant that anyone could be subject to recording or streaming at any time.

Although these glasses were intended to solve societal issues, society soon realized that the wearer ultimately determined the purpose of the glasses. A user of Google Glass could use the technology to record movies in theaters illegally or to cheat in casinos (Davis, 2014; Doyle, 2016). They could also discreetly take or stream photos and videos of individuals who never consented to being recorded in such a way. Furthermore, even if the wearer was not using the glasses maliciously, no one knew what Google was doing with the data, photos, and videos it collected (Essers, 2013). Because of these privacy concerns, Glass wearers were barred from many restaurants and bars to ensure that the guests of such establishments felt safe and protected (Davis, 2014; Weidner, 2020). As mentioned above, society dictates which technologies fail and succeed, and society’s concerns about Google Glass, and its actions in defense of those concerns, eventually caused the product’s failure.

In the wake of Google Glass’s failure, Google learned from society and compromised with it in order to use the same technology in an accepted manner. Google reacted quickly by discontinuing the product in 2015, only one year after its release (Donnell, 2018). It realized, however, that although society did not accept Google Glass, the technology was sound, innovative, and useful. Google announced in 2019 that it was developing Google Glass Enterprise Edition 2, geared toward business professionals and industry (“Glass – Glass,” n.d.). This new use attempts to make the technology successful by accommodating societal concern: bystanders will no longer be subjected to unwanted filming, as the new product will live in private offices and factories rather than on public streets. This reintroduction of Google Glass in a different way, with a different user, might allow for its success, which underscores the idea that society dictates which technologies are accepted.

This case study of Google Glass illustrates the idea that society chooses whether new technologies are accepted or rejected. Google has recognized this dependence on society for success, and it is working to address societal privacy concerns in order to develop a successful technology. Although society rejected Google Glass Edition 1 on privacy grounds, societal concerns about health and safety called for the innovation’s development in the first place. If the reintroduction succeeds, Google Glass Edition 2 will be used to benefit the health, safety, and ergonomic efficiency of employees in offices and factories.

As discussed above, society has chosen which scientific and technological developments define the modern world. Modern politics, religion, events, and society’s values of safety, privacy, exploration, health, and communication have driven technological progress in those respective fields. Although these sections have been only an overview of modern world STS, they serve as an introduction to the following in-depth studies of modern technology. While reading each subsequent section, think about these fundamental ideas: society’s effect on science and technology, science and technology’s effect on society, the aspects of society that drive technological and scientific advancement, and what causes society to accept or reject a particular advancement.

Chapter Questions

  • True or False: Medical devices produced from 3D printers cannot be used on patients, only for educational purposes.
  • True or False:  The first major cause of the Cold War was the increased tensions between the United States and the Soviet Union at the end of World War II.
  • A) Computing and applied mathematics
  • B) Materials science
  • C) Healthcare and medicine
  • D) All of the above
  • B) Aesthetics
  • C) Privacy Concerns
  • D) Lack of Celebrity Endorsement
  • Short Answer: Based on the information provided in this chapter, do you believe that 3-D printing things such as artificial organs is ethical? Why or why not? Support your answer with information in the chapter above.
  • Short Answer: Briefly discuss how a filter bubble works.

Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 presidential election. Journal of Economic Perspectives, 31(2), 211-236. Retrieved from https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211

Baird, D., Shew, A. (2004). Probing the history of scanning tunneling microscopy. In Discovering the Nanoscale.

Bardosova, M., Wagner, T. (2013). NATO Science for Peace and Security Series -C: Environmental Security Nanomaterials and Nanoarchitectures A Complex Review of Current Hot Topics and their Applications.

Barnhart, B. (2019, February). Everything you need to know about social media algorithms. Sprout Social. Retrieved from https://sproutsocial.com/insights/social-media-algorithms/

Barone, M. (2006). Astroparticle, Particle And Space Physics, Detectors And Medical Physics Applications – Proceedings Of The 9th Conference. Hackensack, N.J.: World Scientific.  http://search.ebscohost.com.libproxy.clemson.edu/login.aspx?direct=true&db=e000xna&AN=210566

Becker, G. (2019, January 16). The elusive embryo: How women and men approach new reproductive technologies. Berkeley: University of California Press.

Centers for Disease Control and Prevention. (n.d.). Infertility. https://www.cdc.gov/reproductivehealth/infertility/index.htm

Bryan, W. (2016, February 1). Scratch-Resistant, UV-Reflecting Lenses. https://www.nasa.gov/offices/oct/40-years-of-nasa-spinoff/scratch-resistant-uv-reflecting-lenses

Brothers, K. B., & Rothstein, M. A. (2015). Ethical, legal and social implications of incorporating personalized medicine into healthcare. Personalized Medicine, 12(1), 43–51.

Browne, M. W. (1985, April 2). ‘Star Wars’ science expected to spawn peaceful inventions: Gains seen for medicine and industry. New York Times (1923-Current File). http://libproxy.clemson.edu/login?url=https://search-proquest-com.libproxy.clemson.edu/docview/111234670?accountid=6167

Cha, A. E. (2018, April 27). How religion is coming to terms with modern fertility methods. https://www.washingtonpost.com/graphics/2018/national/how-religion-is-coming-to-term

Chun, Y. S., Byun, K., & Lee, B. (2011). Induced pluripotent stem cells and personalized medicine: current progress and future perspectives. Anatomy & Cell Biology, 44(4), 245. https://doi.org/10.5115/acb.2011.44.4.245

Davis, W. (2014). The Eyes Have It: Likely to become a high-tech trend, Google Glass is already causing legal experts to see problems. ABA Journal, 100(4), 17-19. www.jstor.org/stable/441598

Digital Media Literacy: What is an echo chamber? GCF Global. https://edu.gcfglobal.org/en/digital-media-literacy/what-is-an-echo-chamber/1/

Donnell, D. O. (2018, November 14). Google Glass is coming back, but will stay in a business-oriented sandbox this time. https://www.notebookcheck.net/Google-Glass-is-coming-back-but-will-stay-in-a-business-oriented-sandbox-this-time.361716.0.html

Doyle, B. (2016, February 28). 5 Reasons Why Google Glass was a Miserable Failure. https://www.business2community.com/tech-gadgets/5-reasons-google-glass-miserable-failure-01462398

Essers, L. (2013, June 18). Google Glass privacy concerns raised by international data protection authorities.  https://www.pcworld.com/article/2042327/google-glass-privacy-concerns-raised-by-international-data-protection-authorities.html

Feenberg, A. (2012). Questioning technology. Routledge.

Getchell, M. (n.d.). The start of the Space Race (article). Retrieved from https://www.khanacademy.org/humanities/us-history/postwarera/1950s-america/a/the-start-of-the-space-race

Gilbert, M. (2014). The second world war: a complete history. Rosetta Books.

Glass – Glass. (n.d.). https://www.google.com/glass/start/

Gollin, M. (2018, November). What are dark posts on social media?  Falcon.IO.   https://www.falcon.io/insights-hub/topics/social-media-strategy/what-are-dark-posts-on-social-media-2018/

Haggerty, J. J. (1985). Spinoff 1985 (pp. 104–105). Washington, D.C.: U.S. Government Printing Office. https://spinoff.nasa.gov/back_issues_archives/1985.pdf

Kollanyi, B., Howard, P.N., & Woolley, S.C. (2016). Bots and automation over twitter during the U.S. election. Oxford, UK: Project on Computational Propaganda. http://geography.oii.ox.ac.uk/wp-content/uploads/sites/89/2016/11/Data-Memo-US-Election.pdf

Levitt, I. M. (1960, January 31). Man in space: The next ten years: An astronomer details, step by step, the expansion of man’s horizons in prospect. New York Times (1923-Current File). http://libproxy.clemson.edu/login?url=https://search-proquest-com.libproxy.clemson.edu/docview/115209561?accountid=6167

McDougall, W. A. (2008). The Heavens and the Earth: A Political History of the Space Age. United States: Johns Hopkins University Press.

Nawrat, A. (2018, August 7). 3D printing in the medical field: Four major applications revolutionizing the industry. https://www.medicaldevice-network.com/features/3d-printing-in-the-medical-field-applications/

Oreskes, N., & Krige, J. (Eds.). (2014). Science and technology in the global Cold War. MIT Press.

Peng, W., Datta, P., Ayan, B., Ozbolat, V., Sosnoski, D., & Ozbolat, I. T. (2017, May 10). 3D bioprinting for drug discovery and development in pharmaceutics.  https://www.sciencedirect.com/science/article/abs/pii/S1742706117303069

Pogue, D. (2013). Google’s Creep Factor. Scientific American,308(6), 38-39. www.jstor.org/stable/26018264

Roco, M. C. (2003). Nanotechnology: Convergence with modern biology and medicine. Current Opinion in Biotechnology, 14(3), 337–346. https://doi.org/10.1016/S0958-1669(03)00068-5

Sahoo, S. K., Labhasetwar, V. (2003). Nanotech approaches to drug delivery and imaging. Drug Discovery Today, 8(24), 1112–1120. https://doi.org/10.1016/S1359-6446(03)02903-9

Sallam, H. N., & Sallam, N. H. (2016, March 28). Religious aspects of assisted reproduction. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5096425/

Smith, T. (2018, September 14). Test tube baby Louise Brown and the birth of IVF. https://www.cbsnews.com/news/test-tube-baby-louise-brown-and-the-birth-of-ivf/

Tarantola, A. (2019, August). Social media bots are damaging our democracy. Engadget.  https://www.engadget.com/2019/08/15/social-media-bots-are-damaging-our-democracy/

10 Greatest Scientific Discoveries and Inventions of 21st Century | ISB Glasgow. (n.d.).  https://www.isbglasgow.com/10-greatest-scientific-discoveries-and-inventions-of-21st-century/

3D-printed Surgical Tools (n.d.).  https://versoteq.com/blog/3d-printed-surgical-tools

Top 5 Applications for 3D Printing Technology. (2019, May 17). https://www.makerbot.com/stories/design/top-5-3d-printing-applications/

20 Inventions We Wouldn’t Have Without Space Travel. (n.d.).  https://www.jpl.nasa.gov/infographics/infographic.view.php?id=11358

Vogenberg, F. R., Barash, C. I., & Pursel, M. (2010). Personalized medicine – Part 1: Evolution and development into theranostics. P and T, 35(10).

Wang, H., Lee, M. K., & Wang, C. (1998). Consumer privacy concerns about Internet marketing. Communications of the ACM, 41(3), 63-70.

Weidner, J. B. (2020, January 29). How & Why Google Glass Failed. https://www.investopedia.com/articles/investing/052115/how-why-google-glass-failed.asp

Young, M. K., Hsu, J., Neiman, D., Kou, C., Banktson, L., Kim, S. Y., . . . Raskutti, G. (2018). The stealth media? Groups and targets behind divisive issue campaigns on Facebook. Political Communication, 35(4). Retrieved from https://www.tandfonline.com/doi/full/10.1080/10584609.2018.1476425?casa_token=JpfzRtMcvW0AAAAA%3AhjAmbOz9d_Dakeq5axjcngquuTucWwewmLV44CKZMaW73Jc7g

“Magnetic Nanoparticles for Clinical Diagnostics and Therapy” by Aihua Fu, Ph.D. and Shan X. Wang, Ph.D. is in the Public Domain, CC0

“3D printer printing: Anycubic I3 Mega 3D Drucker” by Marco Verch is in the Public Domain, CC0

“Google Glass with frame” by Mikepanhu is in the Public Domain

“SpaceX Facility and Test Center” by PeakPx is in the Public Domain, CC0

“IBM PC 5150” by Boffy B, Wikipedia is in the Public Domain

“Sputnik-1. ” by paukrus is licensed under CC BY-SA 4.0

“Launch of Apollo 11” by NASA is licensed under CC BY-SA 2.0

“In vitro fertilization” by Wikipedia is in the Public Domain

“Social Networks” by Tracy Le Blanc is in the Public Domain

To the extent possible under law, Juliana-Marie Troyan; Maggie Elpers; Taylor Lorusso; Sevanna Boleman; Willis Watts; Joseph Rivera; David Jonah Lamothe; and Anthony Spearman have waived all copyright and related or neighboring rights to Science, Technology, & Society: A Student-Led Exploration, except where otherwise noted.

Feb 13, 2023

200-500 Word Example Essays about Technology

Got an essay assignment about technology? Check out these examples to inspire you.

Technology is a rapidly evolving field that has completely changed the way we live, work, and interact with one another. Technology has profoundly impacted our daily lives, from how we communicate with friends and family to how we access information and complete tasks. As a result, it's no surprise that technology is a popular topic for students writing essays.

But writing a technology essay can be challenging, especially for those short on time or struggling with writer's block. This is where Jenni.ai comes in. Jenni.ai is an innovative AI tool designed specifically for students who need help writing essays. With Jenni.ai, students can quickly and easily generate essays on various topics, including technology.

This blog post aims to provide readers with various example essays on technology, all generated by Jenni.ai. These essays will be a valuable resource for students looking for inspiration or guidance as they work on their essays. By reading through these example essays, students can better understand how technology can be approached and discussed in an essay.

Moreover, by signing up for a free trial with Jenni.ai, students can take advantage of this innovative tool and receive even more support as they work on their essays. Jenni.ai is designed to help students write essays faster and more efficiently, so they can focus on what truly matters – learning and growing as a student. Whether you're a student who is struggling with writer's block or simply looking for a convenient way to generate essays on a wide range of topics, Jenni.ai is the perfect solution.

The Impact of Technology on Society and Culture

Introduction:

Technology has become an integral part of our daily lives and has dramatically impacted how we interact, communicate, and carry out various activities. Technological advancements have brought positive and negative changes to society and culture. In this article, we will explore the impact of technology on society and culture and how it has influenced different aspects of our lives.

Positive impact on communication:

Technology has dramatically improved communication and made it easier for people to connect from anywhere in the world. Social media platforms, instant messaging, and video conferencing have brought people closer, bridging geographical distances and cultural differences. This has made it easier for people to share information, exchange ideas, and collaborate on projects.

Positive impact on education:

Students and instructors now have access to a wealth of knowledge and resources because of the effect of technology on education. Online learning platforms, educational applications, and digital textbooks allow students to study at their own pace and from any location.

Negative impact on critical thinking and creativity:

Technological advancements have also resulted in a reduction in critical thinking and creativity. With so much information at our fingertips, individuals have become more passive in their learning, relying on the internet for answers rather than on their own reasoning and inventiveness. As a result, independent thinking and problem-solving abilities have declined.

Positive impact on entertainment:

Technology has transformed how we access and consume entertainment. Thanks to streaming services, gaming platforms, and online content creators, people can now access a wide range of entertainment options from the comfort of their own homes. As a result, the entertainment business has entered a new age of creativity and invention.

Negative impact on attention span:

However, the continual bombardment of information and technological stimulation has also reduced attention spans and the capacity to focus. People are easily distracted and struggle to concentrate on a single activity for long, which has hampered productivity and the ability to complete tasks.

The Ethics of Artificial Intelligence And Machine Learning

The development of artificial intelligence (AI) and machine learning (ML) technologies has been one of the most significant technological developments of the past several decades. These cutting-edge technologies have the potential to alter several sectors of society, including commerce, industry, healthcare, and entertainment. 

As with any new and quickly advancing technology, the ethics of AI and ML must be carefully studied. The use of these technologies raises significant concerns around privacy, accountability, and control. As AI and ML grow more ubiquitous, we must assess their possible influence on society and investigate the ethical issues that must be taken into account as these technologies continue to develop.

What are Artificial Intelligence and Machine Learning?

Artificial Intelligence is the simulation of human intelligence in machines designed to think and act like humans. Machine learning is a subfield of AI that enables computers to learn from data and improve their performance over time without being explicitly programmed, as the minimal sketch below illustrates.
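
To show what "learning from data" means in the smallest possible setting, the toy Python example below fits a line to a handful of points by gradient descent; the program is never told the underlying rule and instead infers the parameters from examples. The data, learning rate, and iteration count are invented for illustration.

```python
# Minimal machine-learning sketch: learn y ≈ w*x + b from examples.
# The rule (roughly y = 2x + 1) is never hard-coded; it is inferred.
data = [(1, 3.1), (2, 4.9), (3, 7.2), (4, 8.8)]

w, b, lr = 0.0, 0.0, 0.01           # start ignorant; small learning rate
for _ in range(5000):               # repeatedly nudge w and b to reduce error
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")   # approaches w≈1.9, b≈1.2 on this data
```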

The impact of AI and ML on Society

The use of AI and ML in various industries, such as healthcare, finance, and retail, has brought many benefits. For example, AI-powered medical diagnosis systems can identify diseases faster and more accurately than human doctors. However, there are also concerns about job displacement and the potential for AI to perpetuate societal biases.

The Ethical Considerations of AI and ML

A. Bias in AI algorithms

One of the critical ethical concerns about AI and ML is the potential for algorithms to perpetuate existing biases. This can occur when the data used to train these algorithms reflects the biases of the people and processes that produced it. As a result, AI systems can reproduce those biases and discriminate against certain groups of people, as the toy sketch below illustrates.
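
Here is a toy sketch of how that inheritance happens, with entirely fabricated numbers: if historical hiring records favored one group, a naive model that simply mimics each group's historical approval rate will reproduce the disparity rather than correct it.

```python
# Toy illustration of bias inherited from training data (all numbers invented).
# Historical records: (group, was_hired). Group B was hired far less often.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def historical_hire_rate(records, group):
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive "model" that predicts by matching each group's historical rate
# learns the disparity itself, not any real signal about qualifications.
for group in ("A", "B"):
    rate = historical_hire_rate(history, group)
    print(f"group {group}: learned approval rate = {rate:.0%}")
# group A: learned approval rate = 80%
# group B: learned approval rate = 30%
```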

B. Responsibility for AI-generated decisions

Another ethical concern is the responsibility for decisions made by AI systems. For example, who is responsible for the damage if a self-driving car causes an accident? The manufacturer of the vehicle, the software developer, or the AI algorithm itself?

C. The potential for misuse of AI and ML

AI and ML can also be used for malicious purposes, such as cyberattacks and misinformation. The lack of regulation and oversight in the development and use of these technologies makes such misuse difficult to prevent.

The developments in AI and ML have given numerous benefits to humanity, but they also present significant ethical concerns that must be addressed. We must assess the repercussions of new technologies on society, implement methods to limit the associated dangers, and guarantee that they are utilized for the greater good. As AI and ML continue to play an ever-increasing role in our daily lives, we must engage in an open and frank discussion regarding their ethics.

The Future of Work and Automation

Rapid technological breakthroughs in recent years have brought about considerable changes in our way of life and work. Concerns regarding the influence of artificial intelligence and machine learning on the future of work and employment have increased alongside the development of these technologies. This article will examine the possible advantages and disadvantages of automation and its influence on the labor market, employees, and the economy.

The Advantages of Automation

Automation in the workplace offers various benefits, including higher efficiency and productivity, fewer mistakes, and enhanced precision. Automated processes can accomplish repetitive jobs quickly and precisely, allowing employees to concentrate on more complex and creative activities. Additionally, automation can save organizations money, since it reduces labor costs and minimizes the danger of workplace accidents.

The Potential Disadvantages of Automation

However, automation has significant disadvantages, including job loss and income stagnation. As robots and computers replace human labor in particular industries, there is a danger that many workers will lose their jobs, resulting in higher unemployment and greater economic disparity. Moreover, if automation is not adequately regulated and managed, it might lead to stagnant wages and a deterioration in employees' standard of living.

The Future of Work and Automation

Despite these difficulties, automation is likely to continue shaping how work is done. As a result, firms, employees, and governments must take early measures to address possible issues and reap the rewards of automation. This might entail funding worker retraining programs, enhancing education and skill development, and implementing regulations that support equality and justice at work.

The Need for Ethical Considerations

As technology develops, we must consider the ethical ramifications of automation and its effects on society. Factors to be taken into account include the impact on employees and their rights, possible hazards to privacy and security, and the duty of corporations and governments to ensure that automation is used responsibly and ethically.

Conclusion:

To summarize, the future of employment and automation will most certainly be defined by a complex interaction of technological advances, economic trends, and cultural ideals. All stakeholders must work together to handle the problems and possibilities presented by automation and to ensure that technology is employed to benefit society as a whole.

The Role of Technology in Education

Introduction.

Nearly every part of our lives has been transformed by technology, and education is no different. Today's students have greater access to knowledge, opportunities, and resources than ever before, and technology is becoming a more significant part of their educational experience. Technology is transforming how we think about education and creating new opportunities for learners of all ages, from online courses and virtual classrooms to instructional applications and augmented reality.

Technology's Benefits for Education

The capacity to tailor learning is one of technology's most significant benefits in education. Because students can access online information and tools, they can customize their education to meet their unique needs and interests.

For instance, they can enroll in online classes on topics they are interested in, get tailored feedback on their work, and engage in virtual discussions with peers and subject-matter experts worldwide. As a result, students are better able to acquire the abilities and knowledge necessary for success.

Challenges and Concerns

Despite the numerous advantages of technology in education, there are also obstacles and concerns to weigh. One issue is the growing reliance on technology and the possibility that students will become overly dependent on it. This might result in a lack of critical thinking and problem-solving abilities, as students become passive learners who only follow instructions and rely on technology to complete their assignments.

Another obstacle is the digital divide between those who have access to technology and those who do not. This divide can exacerbate the achievement gap between students and produce unequal opportunities for educational and professional growth. To reduce these consequences, all students must have access to the technology and resources necessary for success.

In conclusion, technology is rapidly becoming an integral part of the classroom experience and has the potential to radically alter the way we learn.

Technology can help students flourish and realize their full potential by giving them access to individualized instruction, tools, and opportunities. While the benefits of technology in the classroom are undeniable, it is crucial to be mindful of the risks and take precautions to guarantee that all students have access to the tools they need to thrive.

The Influence of Technology On Personal Relationships And Communication 

Technological advancements have profoundly altered how individuals connect and exchange information. It has changed the world in many ways in only a few decades. Because of the rise of the internet and various social media sites, maintaining relationships with people from all walks of life is now simpler than ever. 

However, concerns about how these developments may affect interpersonal connections and dialogue are inevitable in an era of rapid technological growth. In this piece, we'll discuss how the prevalence of digital media has altered our interpersonal connections and the language we use to express ourselves.

Direct Effect on Direct Interaction:

The disruption of face-to-face communication is a particularly stark example of how technology has impacted human connections. The quality of interpersonal connections has suffered due to people's growing preference for digital over in-person communication. Technology has been shown to reduce the use of nonverbal signals such as facial expressions, tone of voice, and other indicators of emotional investment in a relationship.

Positive Impact on Long-Distance Relationships:

Yet there are positives to be found as well. Long-distance relationships have benefited greatly from technological advancements. Technologies such as video conferencing, instant messaging, and social media make it possible for individuals to keep in touch with distant loved ones, so it is now far simpler to stay connected despite geographical distance.

The Effects of Social Media on Personal Connections:

The widespread use of social media has had far-reaching consequences, especially for the quality of interpersonal interactions. Social media has both positive and harmful effects on relationships: it allows people to keep in touch and share life's milestones, but it carries costs as well.

Unfortunately, social media has made it all too easy to compare oneself to others, which can lead to feelings of jealousy and a general decline in confidence. Furthermore, social media can cause people to hold inflated expectations of themselves and their relationships.

A Personal Perspective on the Intersection of Technology and Romance

Technological advancements have also altered physical touch and closeness. Virtual reality and other technologies now allow people to simulate physical contact and familiarity in a digital setting. This might be a promising breakthrough, but it has some potential downsides.

Experts are concerned that people's growing dependence on technology for intimacy may lead to less time spent communicating face-to-face and less emphasis on physical contact, both of which are important for maintaining good relationships.

In conclusion, technological advancements have significantly affected the quality of interpersonal connections and the exchange of information. Even though technology has made it simpler to maintain personal relationships, it has also cooled face-to-face interaction between people.

As we move forward, it is essential to keep track of how technology is changing our lives and to make adjustments as necessary. Setting boundaries and prioritizing in-person conversation and physical touch in close relationships may help reduce the harm technology can cause.

The Security and Privacy Implications of Increased Technology Use and Data Collection

The fast development of technology over the past few decades has made its way into every aspect of our lives. Technology has improved many facets of our lives, from communication to commerce. However, its broad adoption has also given rise to significant privacy and security problems. In this essay, we'll look at how the widespread use of technology, and the explosion of data collection that has come with it, affects our right to privacy and security.

Data Mining and Privacy Concerns

Risk of Cyber Attacks and Data Loss

The Widespread Use of Encryption and Other Safety Mechanisms

The Privacy and Security of the Future in a Globalized Information Age

Obtaining and Using Individual Information

The acquisition and use of private information is a significant cause for privacy alarm in the digital age. Data about customers' online habits, interests, and personal details is a valuable commodity for many internet firms. Besides tailored advertising, this information may be used for less desirable purposes like identity theft or cyber attacks.

Moreover, because of the lack of transparency around the gathering of personal information, many individuals are unaware of what data is being collected from them or how it is being used. Privacy and data security have become increasingly contentious as a result.

Data breaches and other forms of cyber-attack pose a severe risk.

The risk of cyber attacks and data breaches is another significant source of worry. More people are using more devices, which means more opportunities for cybercriminals to steal private information like credit card numbers and other identifying data. This can cause monetary losses and harm to one's reputation or identity.

Many high-profile data breaches have occurred in recent years, exposing the personal information of millions of individuals and raising serious concerns about the safety of this information. Companies and governments have responded to this problem by adopting new security methods like encryption and multi-factor authentication.

Many businesses now use encryption and other security measures to protect themselves from cybercriminals and data thieves. Encryption keeps sensitive information hidden by encoding it so that only those possessing the corresponding key can decipher it. This prevents private information like bank account numbers or social security numbers from falling into the wrong hands, as the sketch below shows.
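
As a minimal sketch of encryption in practice (assuming the third-party cryptography package for Python, installable with pip install cryptography), the example below uses that library's Fernet recipe for symmetric encryption: the ciphertext is unreadable on its own, and only a holder of the key can recover the plaintext. The sample message is invented.

```python
# Symmetric-encryption sketch using the "cryptography" package's Fernet recipe.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # secret key; only its holders can decrypt
cipher = Fernet(key)

# Encrypt an (invented) piece of sensitive data.
token = cipher.encrypt(b"card number: 4111-1111-1111-1111")
print(token)                  # unreadable ciphertext, safe to store or transmit

# Decryption is possible only with the matching key.
plaintext = cipher.decrypt(token)
print(plaintext.decode())
```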

Firewalls, virus scanners, and two-factor authentication are all additional security precautions that may be used alongside encryption. While these safeguards do much to stave off cyber attacks, they are not entirely impregnable, and data breaches are still possible.

The Future of Privacy and Security in a Technologically Advanced World

There is little doubt that concerns about privacy and security will persist even as technology improves. Strict safeguards must be in place to secure people's private information as more and more of it is transferred and stored digitally. Achieving this may require novel technologies, heightened levels of protection, and revised rules and regulations governing the collection and storage of private information.

Individuals and businesses are understandably concerned about the security and privacy consequences of widespread technology use and data collection. From the acquisition and use of personal data to the risk of cyber attacks and data breaches, there are numerous obstacles to overcome in a society where technology plays an increasingly important role. Companies and governments must keep investing in security measures, and keep educating people about the significance of privacy and security, if personal data is to remain safe.

In conclusion, technology has profoundly impacted virtually every aspect of our lives, including society and culture, ethics, work, education, personal relationships, and security and privacy. The rise of artificial intelligence and machine learning has presented new ethical considerations, while automation is transforming the future of work. 

In education, technology has revolutionized the way we learn and access information. At the same time, our dependence on technology has brought new challenges in terms of personal relationships, communication, security, and privacy.

Jenni.ai is an AI tool that can help students write essays easily and quickly. Whether you're looking for example essays on any of these topics or seeking assistance in writing your own, Jenni.ai offers a convenient solution. Sign up for a free trial today and experience the benefits of AI-powered writing assistance for yourself.

Start Writing With Jenni Today

Sign up for a free Jenni AI account today. Unlock your research potential and experience the difference for yourself. Your journey to academic excellence starts here.

The Role of Technology in Modern Education

  • Categories: Education Goals, Technology in Education

Published: Jun 13, 2024 | Words: 538 | Page: 1 | 3 min read

Table of contents

  • Introduction
  • Body Paragraph 1: The Benefits of Technology in Education
  • Body Paragraph 2: Enhancing Engagement and Collaboration
  • Body Paragraph 3: Overcoming Challenges in Technological Integration
  • Body Paragraph 4: The Future of Technology in Education



Technology, war and the state: past, present and future

This article is part of a special issue of International Affairs (July 2019) on ‘Re-visioning war and the state in the twenty-first century’, guest-edited by Tracey German.

Warren Chin, Technology, war and the state: past, present and future, International Affairs , Volume 95, Issue 4, July 2019, Pages 765–783, https://doi.org/10.1093/ia/iiz106

War made the state, and the state made war, but does this statement hold true today? Will it apply in the future? The consensus is that the absence of major war within the western world, post 1945, did cause the war–state relationship to change, but each became significantly less important to the other. This article argues that the relationship was closer and deeper than has been assumed. It proposes that the peculiar strategic conditions created by the nuclear age caused states to wage a ritualistic style of war, in which demonstration rather than the physical application of violence became increasingly important. Within this setting, the state drove the process of technological innovation in defence to its limits in an effort to demonstrate its military superiority. This massive peacetime investment in defence technology exerted a huge impact on the character of war, which led to new strategic forms. However, most importantly, the diffusion of military technology also affected the wider economy and society, leading to a form of internal power transition within states. The author speculates on how these elemental forces will play out in the future, what will happen to war and the state, and whether we will reach a point where war leads to the unmaking of the state.

This article explores the changing relationship between war and the state in the western world since the end of the Second World War. Specifically, it analyses how that relationship evolved during and after the Cold War, and extrapolates from current trends to speculate what impact war will have on the future evolution of the state. Our understanding of the connection between war and the state assumes that war played an instrumental role in the formation of the state in the early modern period. The synergistic relationship established at that time then blossomed over the next four centuries, during which both state and war grew exponentially. However, this expansion was checked by the declining incidence and scale of interstate war after 1945, which eventually allowed new political and economic priorities to emerge that resulted in the reshaping of, and a changed role for, the state. 1

The article presents an alternative view of the war–state relationship in the post-Second World War era. It does not challenge the logic that the decline in war affected the war–state connection. 2 However, it does not see this change as evidence of atrophy. Instead, it demonstrates how the complexity of war after 1945 led to a deep but more subtle interaction, which had a profound effect on war, the state and society in the western world. While I do not challenge the premise that a range of factors played a role in shaping the connection between war and the state, the precise interaction and relative importance of these forces have altered over time, and this has caused the demands of war on the state to shift in significant ways. In the period under scrutiny in this article, I argue that the role of technology in war increased dramatically because of the nuclear revolution. In this setting, technological development reduced the opportunities for war, but the arms race it generated also brought into being new technologies, and these facilitated new forms of conflict. These developments affected our understanding of war's character and its interaction with the state.

Military history provides a rich literature on war and technology, but its focus has tended to be on the importance of technology in helping militaries win wars. 3 In rarer cases, writers have sought to situate war within a broader technological, economic, social and cultural framework. 4 This is where the principal focus of the present article lies. However, my aim is to turn this domain upside down and explore not just how the world has changed (and continues to change) war, but how the war–technology dynamic has changed the world, in what might be described as a form of positive feedback. To this end, I expand and build on the historical overview presented by William McNeill and Maurice Pearton of the financial and technical linkages forged between war and the state starting in the late nineteenth century. 5 This provides a conceptual framework within which to explore how that relationship evolved and how it might change in the future. Most importantly, this construct allows the contemporary war–state relationship to be viewed through a different lens, one that sees a stronger, darker and more damaging connection than is generally recognized.

In addressing this issue, I have relied on the experiences of the United States and the United Kingdom, as representative examples of western states, to support the propositions set out here. First, in both cases the state played a leading role in promoting defence research after 1945; technology was of central importance in their strategic frameworks, and continues to be so today. Second, both states consciously exploited defence technology to promote wider economic prosperity. I recognize that attempts to look into the future carry a great deal of risk, and I explain below how I have taken this risk into account. The only general point I would make here is that history also shows that, sometimes, military forecasting is successful. I have looked at such examples and drawn on their methodologies.

In sum, the central argument of this article is that, after 1945, technology acted as a vital agent of change in the war–state relationship, and eventually the ripples of this change spread throughout society. To illustrate this point, one has only to look at the ubiquitous smartphone and the genesis, in defence research, of the technologies that made it possible. This capability has in turn affected the conduct of war, and through war the state. Thus the smartphone provides just one significant example of how technology and war are shaping the state and the world we live in. 6

The article is divided into three parts. The first explores the war–state relationship and the factors that shaped it during the Cold War. It explains why technological innovation became so important in war, and how this imperative influenced both our understanding of war and the interaction between war and the state. The second section examines why the imperative for technological innovation persisted, and why the war–state infrastructure survived in the post-Cold War era. Finally, the third section explores how current trends might influence the war–state relationship in the future.

Clausewitz missed the importance of technology as a variable in his analysis of war. 7 Tilly, one of the most critical commentators on the war–state relationship, was also sceptical about the importance of technology in this process, and focused instead on the economics of waging war. 8 The omission is understandable, because the history of war is characterized by long phases of technological stagnation punctuated by occasional spasms of revolutionary change caused by a variety of forces. 9 This point is illustrated by a cursory glance at naval technology, which shows that ship design and armaments in Europe remained largely unchanged from 1560 to 1850. 10 However, I contend that the importance of technology in the conduct of war increased dramatically from the nineteenth century onwards, for three reasons. The first was the impact of the Industrial Revolution. This period of sustained and rapid technological innovation eventually affected all areas of human activity, including war. Evidence of the increased pace of technological change can be seen in Schumpeter's economic analysis of capitalism and its relationship to technology. In his view, four long economic cycles in the Industrial Revolution led to ground-breaking changes in the mode of production in little more than a hundred years. 11 At the microeconomic level, Schumpeter also challenged economic orthodoxy by arguing that capitalism was based not on price competitiveness but on innovation, via the creation of ‘the new commodity, the new technology, the new source of supply, the new type of organisation’. Schumpeter called this the process of ‘creative destruction’: firms innovate to achieve a position of monopoly and thereby maximize profits until that advantage is cancelled out by the next innovation. 12

During this time, the technological needs of the armed forces ‘were met out of the same scientific and technical knowledge that manufacturing industry had put to use in satisfying its commercial needs’. 13 As such, wider forces fed into the realm of war. However, this situation slowly changed, such that the demands for military technology eventually shaped the wider context in which it existed—which brings us to the second reason why the importance of technology increased. McNeill demonstrates how the state began to assume a role as a sponsor of technological innovation in defence in the late nineteenth century as the military became increasingly interested in the exploitation of technology. Such state sponsorship of innovation was termed ‘command technology’. 14 However, as Hartcup observed, this process of innovation operated within military, fiscal and time constraints that imposed a limit on the ambition of defence research. 15 In general, mass industrialized war in the twentieth century emphasized quantity more than quality, and required the mobilization of society and the economy via the state. The demands of war also resulted in the state expanding into the provision of education and health care to ensure the population was fit to wage war. Even liberal Britain succumbed to this view of the state. 16 These features eventually became the defining characteristics of what Hables Gray called ‘modern war’. 17

The advent of the nuclear age precipitated a profound change in the organization and conduct of war. Hables Gray asserts that 1945 marks the dividing line between modern war and the birth of what he terms post-modern war. 18 This philosophical construct is used as intended by post-modernism, not as a label, but as a way of indicating that war, like many forms of human activity, is a discourse. 19 That discourse changed profoundly after 1945 because at that point scientific advance, in the form of nuclear weapons, made modern war impossible. This new strategic setting precipitated what Holsti described as the diversification of warfare; and this in turn resulted in a blurring of the line between peace and war as governments employed a range of means to achieve their policy goals below the threshold of general war. Most importantly, the forms of war proliferated as new ways were devised to employ war as a political tool in a nuclear world. 20 This change did not render Clausewitz's concept of war obsolete, but it did require it to be adapted. 21

Clausewitz explained that ‘war is an act of violence to compel our opponent to fulfil our will’. 22 War is also the continuation of policy by other means. 23 War, then, is defined as a discourse of physical violence to achieve a political goal. However, in examining the post-1945 war–state relationship in the West, we need to revise our understanding of war so that it extends beyond physical violence and bloodshed. Russian military reflections on the Cold War reveal an interesting narrative that reinforces this expansion of war beyond its traditional domain. According to this analysis, the Soviet Union lost the Cold War because it was defeated by non-military means employed by its enemy, which focused on psychological, political, information, social and economic attacks against the Soviet state. 24 Although this interpretation can be contested, it is important to acknowledge that states used both military and non-military levers to confront their enemies in this conflict. Technology played a vital role in facilitating this process, for example via the communications revolution, which made possible activities such as political warfare. However, the most salient aspect of the Cold War was the discourse of deterrence. Within this context, the rituals of war (organizing, preparing and demonstrating an ability to fight nuclear war in the hope of deterring potential opponents, and thereby preventing war) became substitutes for organized violence. Small wars happened on the periphery of the US and Soviet geopolitical space, but in the core region a different kind of cognitive and cultural violence emerged, which can be seen as a form of war. 25

How, then, did technology fit into this new discourse of war? According to Buzan, because nuclear deterrence relied on anticipated weapons performance, it became sensitive to technical innovation, which meant the state had to respond to technological change by investing in defence research to maintain the credibility of its deterrent. 26 As a result, a premium came to be placed on technological innovation in defence, and this caused the role of the state in military research to expand. 27 Consequently, states came to play an essential part in a version of Schumpeter's process of creative destruction, albeit one confined to the realm of defence. The role of the state was vital because it was the state that provided the critical financial resources required to take embryonic technologies and develop them at a speed unlikely to be matched by the civilian market. This facilitated a profound change in the relationship between the state and private industry and undermined the operation of the free market as governments opted to support defence contractors capable of conducting large and complex forms of research and development (R&D). 28 This trend did not go unnoticed; in 1961, President Dwight Eisenhower warned against the pernicious influence exerted by the creation of a military–industrial complex (MIC), a construct which referred to the incestuous relationship between the military, defence industries and politicians acting in concert as an interest group to persuade the state to spend more on defence. 29 Harold Lasswell also noted the rising prominence of the military in peacetime in his thesis of the ‘garrison state’, which described the potential militarization of the American polity. 30 Samuel Huntington echoed this concern in his book The soldier and the state , which considered how the United States could manage an immense military establishment in a time of peace without jeopardizing the sanctity of its democracy. 31 These debates and themes waxed and waned as the Cold War progressed, but they persisted, and even in the 1980s the notion of the MIC was still being discussed. 32 The strategic logic of nuclear deterrence created a climate which justified high defence spending and significant investment in defence research—but why did this infrastructure persist in the more benign environment of the post-Cold War world?

The end of the Cold War resulted in a significant fall in defence expenditure. Equally importantly, the state reduced its participation in sustaining defence research and allowed the private sector to play a more prominent role in defence production. In the UK, where the nationalized defence industries had already been privatized in the 1980s, this process was extended to include the sale of the state's defence research and development arm. This change in industrial and technological policy reflected a broader adjustment as the state lost its position in the vanguard of the technological revolution. Since the start of the Cold War, US government-funded defence research had given rise to technologies such as the internet, virtual reality, jet travel, data joining, closed-circuit TV, rocketry, remote control, microwaves, radar, global positioning, networked computers, wireless communications and satellite surveillance. 33 The subsequent exploitation of these technologies by the private sector reflected a conscious policy choice by most western governments, which was to promote technology spin-offs from defence research into the wider economy as a way of generating wealth. 34 Once the technology had been created, the civil, commercial sector proved adept at adapting and changing the new capabilities. The critical difference between innovation in the defence market and its civilian counterpart was that, in the latter, high rates of consumption led to product and process innovation by companies. As a result, civil technology providers increasingly took the lead in the information revolution. Given this new dynamism, military power relied increasingly on the existing pool of technological knowledge within the broader economy. The increasing emphasis on quality in war also generated greater complexity during operations. This trend facilitated the rise of private military companies in the post-Cold War era and resulted in western states increasingly subcontracting the provision of internal and external security to the private sector. 35

However, in spite of the end of the Cold War, western governments continued to have an appetite for technological innovation and its integration into ever more complex weapons. Indeed, an important feature of post-modern war was that machines assumed an unprecedented importance in the post-Cold War era. As Hables Gray explained: ‘War is a discourse system, but each type of war has different rules of discourse. In postmodern war, the central role of human bodies in war is being eclipsed rhetorically by the growing importance of machines.’ 36

The First Gulf War was an important marker because it revealed to western society the power of technology, at least in a conventional war. As Freedman observed, this conflict resolved the high-tech versus low-tech debate which had persisted throughout the Cold War. 37 Observers now spoke of a paradigm shift in the conduct of war and a revolution in military affairs (RMA) caused by technological advance in computers and communications. 38 Paradoxically, cuts in defence spending and provision compounded the drive to rely on technology in war as smaller militaries sought to pack a bigger punch to compensate for their lack of mass. 39 In the 1990s, the RMA served another purpose in that it allowed for the creation of what Shaw described as ‘risk-free’ war. Technology allowed western states to engage targets at long range with high accuracy, but at no risk to those firing the weapons—something that became very useful in an era of wars of choice. 40 Perhaps the best example of the strengths and weaknesses of this approach was NATO's 78-day bombing campaign against Serbia in 1999. 41

Technological innovation in the techniques of war allowed the state to continue using force as an instrument of policy, especially in those instances where there was no clear political consensus on taking military action. In sum, the state continued to see its security through the prism of technological advance; and this, in turn, helped to sustain the MIC in that brief period between the end of the Cold War and the start of the ‘war on terror’. The idea of an MIC persists today. For example, David Keen points to the powerful economic functions fulfilled by the war on terror, which he believed explained the persistence of a war based on counterproductive strategy and tactics. 42 More recently, Paul Rogers has referred to the creation of a military–industrial–academic–bureaucratic complex, which is exploiting the latest iteration of the war on terror: the war against the so-called ‘Islamic State in Iraq and Syria’ (ISIS). 43 While the technology paradigm was briefly challenged in Iraq in 2006 and replaced by a more labour-intensive approach to war, as articulated in the principles of counter-insurgency, this, in turn, was quickly replaced by less risky, more capital-intensive techniques of war waged with satellites, robots, drones, precision weaponry and special forces. 44 In summary, the elaborate infrastructure of war created during the Cold War endured in the post-Cold War era before being reinvigorated by the fiscal stimulus generated by the war on terror. During this period technology was viewed almost as a silver bullet: it provided a neat answer to complex questions posed by the human and physical terrain of war. Most importantly, for a brief moment at least, it allowed western states to reimagine decisive victories and tidy peace settlements. 45 Such was the allure of technology that Coker speculated on the possibility of a future ‘post-human warfare’ in which machines replaced humanity on the battlefield. 46

How, then, will predicted developments in technology shape the future of war and the state? This is a question that is causing much anxiety in both academic and policy-making circles. As Freedman points out, the future is based on decisions that have yet to be made in circumstances that remain unclear to those looking into a crystal ball. 47 Just as important as this uncertainty are those biases that shape our preferences regarding how we see the future. Cohen has pointed out that debates on the future of war often suffer from being technologically sanitized, ignoring politics and therefore lacking a meaningful context. 48 As a result, the ‘future war’ literature often suffers from an overreliance on a simplistic overview of decisive military technologies. I address these problems in two ways.

The first is to follow the advice offered by the sociologist Michael Mann, who observed that no one could accurately predict the future of large-scale power structures like the state; the most one can do is provide alternative scenarios of what might happen given different conditions, and in some cases arrange them in order of probability. 49 The UK's Development, Concepts and Doctrine Centre adopted this approach and set out multiple scenarios to support its analysis of future strategic trends. 50 Second, it is essential to widen the lens through which the future is projected and to understand the political context within which technology, war and the state will all be situated. To this end, I adopt here the Clausewitzian framework of analysis which Colin Gray employed in considering future war. As he explains:

Future warfare can be approached in the light of the vital distinction drawn by Clausewitz, between war's ‘grammar’ and its policy ‘logic’. Both avenues must be travelled here. Future warfare viewed as grammar requires us to probe probable and possible developments in military science, with reference to how war actually could be waged. From the perspective of policy logic we need to explore official motivations to fight. 51

In exploring the future relationship between war and the state, and the role played by technology, two possible visions are presented here. The first explores the continuation of the status quo and represents the default setting of both the UK and US governments with regard to the future. The second follows the recommendation offered by Paul Davis, who advised that, when selecting a scenario, one should choose a vision that challenges, provokes controversy and breaks out of orthodox thinking. 52

Both models have one thing in common: they will be influenced by what might be seen as the next wave of technological change. This latest technical convulsion is illustrated by Schwab's idea of the fourth Industrial Revolution, which is a crude facsimile of Schumpeter's theory of long economic cycles. The fourth Industrial Revolution builds on the digital revolution, which began in the 1960s, but differs from it in that it entails ‘a much more ubiquitous and mobile internet, … smaller and more powerful sensors that have become cheaper, and … powerful artificial intelligence (AI) and machine learning’. 53 The term ‘artificial intelligence’ was first used by the American scientist John McCarthy in 1956. According to his definition, AI is simply the development of computer systems able to perform tasks that normally require human intelligence, such as speech recognition, visual perception and decision-making. More recently, Max Tegmark has defined AI as a non-biological intelligence possessing the capability to accomplish any complex task at least as well as humans. 54 Currently, the exponential rise of AI is being driven by three developments in the world of computing: smarter algorithms, a vast increase in computing power and the ability to process huge quantities of data. 55 What this means is that humans are now being challenged by machines in the cognitive as well as the physical domains of work. Digital technologies that have computer hardware, software and networks at their core are not new, but represent a break with the third Industrial Revolution because of the level of sophistication and integration within and between them. These technologies are transforming societies and the global economy.
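The interaction of these three drivers can be made concrete with a deliberately toy experiment. The sketch below (in Python; the task, data and model are invented for illustration and are not drawn from any source cited in this article) trains the same simple learner on progressively larger samples and reports its accuracy:

```python
# Illustrative toy only: the task, data and model are invented for this
# sketch. The point is that a fixed, elementary algorithm becomes more
# capable as the quantity of data (and training computation) grows.
import math
import random

random.seed(0)

def make_data(n):
    """Synthetic binary task: label is 1 when x0 + x1 > 1."""
    data = []
    for _ in range(n):
        x0, x1 = random.random(), random.random()
        data.append(((x0, x1), 1 if x0 + x1 > 1 else 0))
    return data

def train(data, steps=5000, lr=0.1):
    """Plain logistic regression fitted by stochastic gradient descent."""
    w0 = w1 = b = 0.0
    for _ in range(steps):
        (x0, x1), y = random.choice(data)
        p = 1 / (1 + math.exp(-(w0 * x0 + w1 * x1 + b)))  # sigmoid
        err = p - y
        w0 -= lr * err * x0
        w1 -= lr * err * x1
        b -= lr * err
    return w0, w1, b

def accuracy(model, data):
    w0, w1, b = model
    hits = sum((w0 * x0 + w1 * x1 + b > 0) == (y == 1)
               for (x0, x1), y in data)
    return hits / len(data)

test = make_data(2000)
for n in (50, 500, 5000):  # same algorithm, progressively more data
    print(n, round(accuracy(train(make_data(n)), test), 3))
```

Nothing hinges on the specifics of this toy; it merely shows, in miniature, the dynamic Schwab and Tegmark describe at vastly greater scale: better-fed and better-resourced algorithms steadily encroach on tasks once reserved for human judgement.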

The fourth Industrial Revolution is not only about smart and connected machines and systems. It is linked with other areas of scientific innovation ranging from gene sequencing to nanotech and from renewables to computing. It is the fusion of these technologies and their interaction across the physical, digital and biological domains that make the fourth Industrial Revolution fundamentally different from previous epochs. Emerging technologies and broad-based innovations are diffusing much more quickly and more widely than their predecessors, which continue to unfold in some parts of the world. It took the spindle, the hallmark of the first Industrial Revolution, 120 years to spread outside Europe; by contrast, the internet permeated the globe in less than a decade. 56 In sum, it is not one specific technology but the sheer number of technologies and the interaction between them that is creating change on such an unprecedented scale that Schwab believes it can be described as a revolution. What, then, does this mean for the relationship between war and the state?

The first model of the future adopts a ‘business as normal’ scenario. In this version of the future, the policy logic of war remains focused on the security of the state and concentrates on state-based threats. The principal causes of war can be identified in the anarchy of the international system. 57 The state preserves its monopoly on the use of force because the barriers to entry into the weapons market remain high. In addition, the state continues to function effectively and to be able to extract the resources needed to maintain its legitimacy and territorial integrity. Within this context, the state still pursues the development of advanced technologies to defend against mostly state-based threats. In this scenario, future war is imagined as a symmetrical contest between conventional forces on an increasingly automated battlefield. Within this space, humans will be augmented and in some instances replaced by AI and robots contending with increasingly lethal forms of weaponry. 58

In this vision of the future, the military's pursuit of the next technology follows a familiar pattern, and the risk and uncertainty involved continue to make state finance and policy support indispensable to defence research. The most recent example of this activity is the UK government's promise to share with BAE Systems the cost of funding the development of a technology demonstrator for the next generation of fighter aircraft. Named Tempest, this fighter will be able to operate either as a manned or as an unmanned aircraft; it will rely on AI and employ directed-energy weapons. 59 A grander example of the status quo scenario is the American-led ‘Third Offset’ strategy, a programme designed to preserve America's military-technological superiority. At the core of the Third Offset is the intention to exploit advances in machine autonomy, AI, quantum computing and enhanced digital communications to improve the man–machine interface in the future battlespace. 60 The United States is investing US$18 billion in the creation of these capabilities, even though it is not clear how feasible the development of technologies such as AI will be. 61

It is important to note that non-western states are also pursuing these policies. The outstanding example here is China. Its economic model, which is based on state-sponsored capitalism, is enabling it to work in a close partnership with privately owned Chinese tech firms to achieve a broad-based technological self-sufficiency in both commerce and defence. 62 Investment in research and development has grown by 20 per cent per year since 1999 to the point where China now spends US$233 billion per annum, a sum that accounts for 20 per cent of the world's research and development spending. 63 Three technologies, it is claimed, matter most to China, and all three relate to its ability to control the internet. These are semiconductors, quantum computing and AI. 64 In 2017, China accounted for 48 per cent of all AI venture funding, and the Beijing government aims to be the centre of global innovation in AI by 2030. 65
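The growth figures cited here can be sanity-checked with simple compound arithmetic. The snippet below is my own back-of-envelope calculation; the 2018 end-year is an assumption, since the text does not date the US$233 billion figure precisely:

```python
# Back-of-envelope check of the cited figures: Chinese R&D spending of
# US$233bn 'now', after growth of 20% per year since 1999. The 2018
# end-year is my assumption, not stated in the source.
spend_now = 233e9
rate = 0.20
years = 2018 - 1999
multiple = (1 + rate) ** years
print(f"Growth multiple over {years} years: {multiple:.1f}x")
print(f"Implied 1999 baseline: ${spend_now / multiple / 1e9:.1f}bn")
# Output: roughly a 32x multiple, implying a baseline of ~$7.3bn.
```

On these assumptions the cited trajectory is internally consistent: a starting point of roughly US$7 billion in 1999, compounding at 20 per cent a year, reaches the US$233 billion figure quoted above.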

In this scenario, then, the state can harvest and refine a range of new technologies generated by the private rather than the public sector in a manner that preserves its monopoly on the use of force. At the same time, that monopoly is reinforced because of the complexity of these capabilities and the challenges posed in their use on operations, which require well-trained and professional forces. Private military companies will persist, but their existence will rely on their ability to draw on this pool of trained personnel created by the state to populate their organizations, which means they will support, not challenge, the state's role as a provider of security.

In the second scenario of the future, the policy logic of war reflects a darker, dystopian image of the relationship between war and the state. In this setting, conflict is a product of desperation caused by scarcity, which is occurring on a global scale. Most importantly, the causes of war lie within states as well as between them. In this multifaceted crisis, technological change is weakening rather than strengthening the state and undermining its ability to cope with the tsunami of problems sweeping over it. The debate over this view of the future policy logic of war began in 1972 with the publication of a hugely controversial book called The limits to growth . 66 This study explored the impact of population growth, industrialization, pollution, and resource and agricultural shortages on the global economic system. Its principal conclusion was that population growth would create an insatiable demand for goods, outstripping the finite resource base of the planet. Humanity's efforts to address this imbalance between demand and supply by increasing productivity would be self-defeating and cause a host of environmental problems. In spite of the passage of time since its first appearance, this book set out themes that are explicitly linked to the spectrum of security issues we face today. 67 Moreover, a recent study conducted at the University of Melbourne in 2014 claimed that the world might still be moving along the trajectory mapped out in 1972, and that economic and environmental collapse could happen before 2070. 68

There is a general assumption that the worst effects of these environmental trends will for the most part be experienced outside the western world. Even when western states are affected, it is assumed, rich countries will possess the financial means to weather the storm. However, a recent report by Laybourn-Langton and colleagues challenges this simplistic assumption and points to the social and economic harm being caused globally by current forms of human-induced environmental change. The authors demonstrate that no region of the world will be untouched by this phenomenon, and use the UK as a case-study to illustrate the point. In their view, the degradation of the environment will interact with existing political and economic trends to undermine the cohesion and internal stability of states across the globe. 69 Interestingly, the report's analysis of the challenges facing governments has not been contested, although its proposed solutions, in terms of radical economic reform, have been strongly challenged by economists. 70

Current trends suggest that a potential environmental crisis might run in parallel with a possible economic crisis. Ironically, the source of this predicament lies in potential problems generated by the fourth Industrial Revolution. Like the military, business is also fast approaching a time when machine intelligence can perform many of the functions hitherto carried out by humans in a range of occupations. As McAfee and Brynjolfsson explain, earlier waves of innovation mechanized occupations that relied on physical labour, allowing new forms of economic activity and employment based on human cognitive abilities to develop. 71 However, this cognitive comparative advantage is now under threat, as computer algorithms have reached a point where they can outperform humans in many jobs. 72

As in the military domain, so in our economic and political affairs it is predicted that AI will precipitate a revolution. A PricewaterhouseCoopers (PwC) report predicted that 38 per cent of all jobs in the United States are at high risk of automation by the early 2030s. 73 Most of these are routine occupations, such as those of forklift drivers, factory workers and cashiers in retail and other service industries. This depressing analysis is supported by the Bank of England's estimate that up to 15 million jobs in the UK are at risk from increasingly sophisticated robots, and that their loss will serve to widen the gap between rich and poor. 74 Most worrying is the fact that, in the short term, the jobs most at risk are low-paid and low-skilled occupations, which are precisely the jobs the UK and US economies have been so successful in generating to create record levels of employment since the financial crash of 2008.

As in the past, those most affected by this change will be the economically least powerful sectors of society—the old, and unskilled and unorganized labour. Until now, the managerial and professional classes have been able to use their economic and political positions to protect themselves from the worst effects of such crises. 75 The big difference in this revolution is that AI threatens traditional professional middle-class occupations. Any job that can be done via the application of pattern-searching algorithms will be vulnerable. This includes banking and finance, the law and even education. Daniela Rus has argued that humans need the personal touch in their day-to-day lives and that people are therefore guaranteed a place in the job market. 76 Sadly, Harari challenges even this view, claiming that machines can mimic empathy by monitoring blood pressure and other physical indicators in interactions between AI and humans. 77 A recent report by the Wall Street Journal supports this view. In its investigation of the use of AI in the provision of psychological therapy, it found that people preferred the treatment offered by the AI precisely because it was a machine, and so they did not feel judged. The system can also be configured to fit people's preferences, creating a 3D computer-generated image that is comforting and reassuring. 78
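To give a concrete sense of what ‘pattern-searching’ means in a professional context, the sketch below is a deliberately naive illustration of automated contract triage. The clauses, patterns and risk labels are all invented for this example; real legal-technology systems are statistical and far more sophisticated:

```python
# Naive illustration of 'pattern-searching' applied to a routine
# professional task: flagging contract clauses for human review.
# All clauses and patterns here are invented for demonstration.
import re

CLAUSES = [
    "The supplier may terminate this agreement with 30 days notice.",
    "Liability is unlimited in respect of death or personal injury.",
    "Payment is due within 90 days of invoice.",
    "This agreement is governed by the laws of England and Wales.",
]

RISK_PATTERNS = {
    "termination": re.compile(r"\bterminat\w*", re.I),
    "unlimited liability": re.compile(
        r"\bliab\w*\b.*\bunlimited\b|\bunlimited\b.*\bliab\w*", re.I),
    "long payment terms": re.compile(r"\b(6\d|9\d|1\d\d)\s+days\b"),
}

for clause in CLAUSES:
    hits = [name for name, pattern in RISK_PATTERNS.items()
            if pattern.search(clause)]
    status = "FLAG" if hits else "ok"
    print(f"{status:4} {', '.join(hits):20} {clause}")
```

Even this crude keyword matching captures a slice of what was once billable review work; the statistical pattern recognition now deployed against text, images and transactions is qualitatively more capable, which is why the occupations listed above are considered exposed.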

A significant limitation of AI and machine technology is that currently they cannot replicate the dexterity of humans in handling delicate objects, and this does leave a role for humans in the workplace. However, scientists in California are looking at the use of AI and machine technology as a way of addressing the acute labour shortages experienced in the fruit-picking industry; this includes the development of machines capable of deciding which fruit is ripe for picking, and doing so in a way that does not damage the produce during picking, processing or distribution. Given these developments, Harari's prediction for humans in the workplace is bleak. ‘In the twenty-first century we might witness the creation of a massive new unworking class: people devoid of any economic, political or even artistic value, who contribute nothing to the prosperity, power and glory of society.’ 79 The mass unemployment generated would be on an unprecedented scale and likely to precipitate instability and violence. 80

Further evidence to support the depressing scenario depicted here is provided by the former head of Google China, Dr Kai-Fu Lee, a man with decades of experience in the world of AI. In his view, AI ‘will wipe out billions of jobs up and down the economic ladder’. 81 A typical counter to this view is that AI will lead to the creation of new jobs and new careers; but, as Tegmark explains, the evidence does not support this claim. If we look back over the last century, what is clear is that ‘the vast majority of today's occupations predate the computer revolution. Most importantly, the new jobs created by computers did not generate a massive number of jobs.’ 82

What then are the political and security implications of this profound economic change in terms of war and the state? Although depressing, the scenario depicted above does not mean we are condemned to what Martin Wolf describes as a kind of ‘technological feudalism’. 83 As Gurr points out, past economic crises have provided political incentives for social reforms: for example, the New Deal in the United States, which represented a revolutionary change in how central government sought to manage the economy. 84

According to Wolf, three factors might determine how well the state deals with these challenges: first, the speed and severity of the transformation we are about to experience; second, whether the problem is temporary or likely to endure; and third, whether the resources are available to the state to mitigate the worst effects of these changes. In the past, western governments have deployed a range of policies to deal with recessions or, as in the 1970s, scarcity of resources such as oil. However, these macroeconomic policy responses operated on the assumption that such crises were temporary, and that economic growth would resume and normality be restored quickly if the right measures were in place. In contrast, the environmental crisis and the AI revolution are happening rapidly and both will be enduring features of economic and political life. In Wolf's view, this latest revolution will require a radical change in our attitude towards work and leisure, with the emphasis on the latter. He also believes we will need to redistribute wealth on a large scale. In the absence of work, the government might resort to providing a basic income for every adult, together with funds for education and training. The revenue to fund such a scheme could come from tax increases on pollution and other socially negative behaviours. In addition, intellectual property, which will become an important source of wealth, could also be taxed. 85

However, the introduction of these measures will not necessarily prevent a rise in politically motivated violence. As Gurr explains, recourse to political violence is caused primarily not by poverty but by relative deprivation. This is defined as ‘actors’ perception of discrepancy between their value expectations and their environment's apparent value capabilities'. 86 As such, it reflects the difference between what people believe they are legitimately entitled to and what they achieve, perceptions of which have become acute in the age of the smartphone. Relative deprivation applies to both the individual and the group. Seen in this light, the bright, shiny new world created by AI provides a potentially rich environment for relative deprivation—particularly if large swathes of the middle classes are frustrated in their ambitions and suffer a loss of status as a socio-economic group. 87 More worrying is that this technological and economic revolution will coincide with the global deterioration of the environment set out above, which also challenges the state.
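Gurr's definition lends itself to a simple formalization. The notation below is my own gloss, offered only to make the mechanism explicit, and is not Gurr's own:

\[
RD_i \;=\; \frac{E_i - C_i}{E_i}
\]

where \(E_i\) denotes actor \(i\)'s value expectations and \(C_i\) the value capabilities the actor perceives the environment to afford. \(RD_i\) grows as achievement falls short of perceived entitlement; on this reading, a status-threatened middle class, rather than the very poorest, can be the more combustible group, which is precisely the worry raised above.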

Within this scenario, states in the western world will struggle just as much as states in the developing world. If the legitimacy of the state is measured in terms of its capacity to effectively administer a territory under its control, then the political context set out here poses a significant threat to this institution. The extraction of resources through taxation will prove extremely difficult as the tax base shrinks. This will affect the ability of the state to provide the public goods the population expects and requires. A weaker state, which lacks the resources and capacity to sustain the population, will also lack legitimacy; this could cause the social contract to break down and result in widespread violence. What, then, will the future grammar of war look like in this political and social context?

In this version of the future, the most fundamental aspect of the technology–war interaction will be the challenge to the state's retention of the monopoly of violence. Projections about the end of the state's monopoly on the use of force have been made before, but the current trajectory of technological change is making this threat more plausible, and bringing it closer. 88 This speculative line of enquiry was given substance in 1999 by two colonels in the Chinese People's Liberation Army, Qiao Liang and Wang Xiangsui. Their study was conceived mainly within the context of a future war between the United States and China, and so their thinking was developed within the setting of a state-based conflict. However, their central thesis is relevant here because they believed the world was living in an unprecedented age in terms of the speed and breadth of technological innovation. There are, they argued, so many essential technologies emerging that it is difficult to predict how these will combine, or what the effect of these combinations might be in military and political terms. Developments in biotechnology, materials technology, nanotechnology and, of course, the information revolution are creating new opportunities and ways of attacking other states. 89 An important observation made in Unrestricted warfare is that new technologies, which could be used as weapons, are increasingly part of our normal day-to-day lives. 90 In sum, the colonels identified a range of non-military means that are technically outside the state's control and that might allow a weaker actor to fight and defeat its more powerful adversary. The 20 years that have passed since the first publication of Unrestricted warfare have demonstrated the authors' prescience in respect of what are deemed to be new types of conflict today. For example, what they called ‘superterrorism war’ seemed to come to fruition on 9/11. We can see how state and non-state actors have exploited emerging everyday technologies to challenge powerful nation-states. Of great importance is the way in which groups such as ISIS and revisionist powers such as Russia have weaponized social media in their efforts to weaken those who oppose them. ISIS, indeed, claimed that media weapons could be more potent than atomic bombs. 91

It is believed that Russia is increasingly relying on non-military means to challenge the West. Not surprisingly, evidence is mounting that it influenced the outcome of the 2016 US presidential election. 92 This form of activity is now a persistent feature of the conflict spectrum and is practised by a variety of states. 93 In August 2018, Facebook closed 652 fake accounts and pages with ties to Russian and Iranian state-based organizations. In both cases, the objective appears to have been to influence domestic politics in the UK, the US, the Middle East and Latin America. Four campaigns were identified, three of which originated in Iran. 94 With over 2 billion accounts to police on Facebook, it is feared this practice will persist.

The blurring of the distinction between military and civilian technology is not the only reason why such capabilities are becoming more accessible. Moises Naim points to the falling cost of many technologies used in both defence and the civilian sector, which is making them available to weak states and violent non-state actors. 95 An excellent example of this trend can be seen in the domain of synthetic biology, a new field that combines the power of computing and biology to ‘design and engineer new biological parts, devices and systems and redesign existing ones for other purposes’. 96 In 2003, the Human Genome Project completed the first full sequencing of human DNA. The successful completion of this project took ten years and was the result of work done in over 160 laboratories, involving several thousand scientists and costing several billion dollars. It is now possible to buy a DNA sequencing device for several thousand dollars and sequence a person's genome in less than twenty-four hours. So steeply, in fact, have sequencing costs fallen that the industry is no longer profitable in the developed world and is now based primarily in China. By way of example of the potential threat posed by this new science: in 2005, scientists worried about the possibility of another flu pandemic recreated the Spanish flu virus, which during and after 1918 killed 50 million people in two years. In 2011, scientists employed these techniques to manipulate the H5N1 bird flu virus and create a variant capable of spreading from birds to humans. It is feared the technical bar to entry into this domain is now sufficiently low that it can be exploited for nefarious purposes by individuals or groups. 97 Precisely the same fears have been expressed about the cyber domain. According to one Israeli general, ‘cyber power gives the little guys the kind of ability that used to be confined to superpowers’. 98 In the future, we might even be able to make weapons via 3D printers: in theory, it is possible to build a handgun or even an assault rifle with this technology.
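The scale of the cost collapse described here is worth pausing on. The snippet below makes the arithmetic explicit; the dollar and time figures are rough readings of the text, with ‘several billion’ taken as US$3 billion and ‘several thousand’ as US$3,000:

```python
# Rough scale of the fall in DNA sequencing costs described above.
# Figures are approximations of the text's 'several billion' over ten
# years versus 'several thousand dollars' in under 24 hours.
hgp_cost, today_cost = 3e9, 3e3      # assumed dollar figures
hgp_days, today_days = 10 * 365, 1   # project duration vs one day
print(f"Cost fell by a factor of ~{hgp_cost / today_cost:,.0f}")
print(f"Time fell by a factor of ~{hgp_days / today_days:,.0f}")
# Output: cost down ~1,000,000x; time down ~3,650x.
```

A roughly million-fold fall in cost within two decades is precisely the kind of trajectory that moves a capability from the preserve of states to within the reach of individuals and small groups.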

However, before concluding that the state is about to wither away, we need to remember that these technologies are still maturing. Therefore, whether or not advances in the cyber domain will undermine or reinforce the power of the state remains a contested point. As Betz points out, launching a successful attack against another state via this medium can be very costly. The Stuxnet computer virus, which was used to attack Iran's nuclear programme, was a very sophisticated piece of software developed by a dedicated team of specialists over a long period. The successful insertion of this virus also required high-grade intelligence on the Iranian nuclear programme. Consequently, the success of a cyber attack depends on a combination of capabilities, not just the development of a virus, and at the moment this puts the state at a considerable advantage. 99 A similar point can be made in the case of 3D printing: you need to do more than just download the code to print the weapon. You also need access to complicated and expensive computer-aided design software and a high-quality metal 3D printer capable of using steel, aluminium or nickel. Such a machine costs over US$100,000, which is nearly 60 times the price of a standard 3D printer which uses plastic. The latter has been used to print plastic guns, but these proved unreliable and likely to explode in the user's hand. 100
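The printing example carries its own arithmetic. A quick check of the price ratio cited above, using the figures as given in the text:

```python
# Sanity check on the cited price ratio: a metal-capable 3D printer at
# over US$100,000 versus 'nearly 60 times' the price of a standard
# plastic printer. Figures are those quoted in the text.
metal_printer_cost = 100_000
ratio = 60
print(f"Implied price of a standard printer: "
      f"~${metal_printer_cost / ratio:,.0f}")
# Output: ~$1,667 -- consumer territory, unlike the metal machine.
```

The asymmetry matters for the argument: the cheap end of the technology produces unreliable plastic weapons, while the end capable of producing serviceable arms remains, for now, priced at a level that favours states and large organizations.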

Finally, technology will also allow the state to attempt to counter internal threats to its authority. Stephen Graham notes that a significant trend in the war on terror has been the blurring between civilian and military applications of technologies dealing with control, surveillance, communications, simulation and targeting. The capability to exercise control via technologies which are intended to provide a service, such as parking and congestion charging, has dramatically increased the opportunities to conduct electronic surveillance for a host of other purposes. 101

‘War made the state, and the state made war’ is a maxim that has shaped our historical understanding of this relationship. In the West, the general absence of major war since 1945 changed the war–state relationship, and there is now a consensus that each is significantly less important to the other. My aim in this article has been to provide a more nuanced understanding of the war–state relationship that emerged after 1945.

The existence of nuclear arsenals made total or modern war obsolete. Within this strategic setting a new form of war emerged. Post-modern war did not require the state to mobilize its entire population and economy to fight a life-or-death struggle against other states, largely because its principal focus was on devising ways to use military power to deter war or devising new means to attack the enemy's moral rather than its physical power. As a result, the logic of war transcended simple notions of battle and victory. War between the Great Powers and their allies tended to be confined to the grey zone between peace and open violence. However, the drive for technological innovation, caused by the peculiarities of the Cold War, ensured that war and the state remained strongly connected, as only the state had the capacity to stimulate research and development on the scale required to ensure the efficacy of strategic deterrence.

The drift towards more capital-intensive modes of warfare continued in the post-Cold War era. Technology gave western governments the internal independence to prosecute wars because they demanded little sacrifice from society. In a period characterized by a plethora of politically unpopular ‘wars of choice’, this allowed states to employ force in pursuit of even vague, value-based objectives. Most importantly, these new means of war enabled nuclear-armed states to continue fighting each other in the space between war and peace using both military and non-military means. We have seen evidence of this in Ukraine and in the South China Sea.

This corporatist alliance between the state and private industry had impacts on politics, the economy and society, but in ways that did not conform with recognized patterns of behaviour associated with modern war. This is possibly why the war–state relationship since 1945 is viewed in terms of decline. However, the persistent debate about the existence of the MIC, admittedly a crude construct, is evidence of the survival of the war–state relationship and of its wider impact. The clearest evidence of this can be seen in the role played by military research in causing and accelerating scientific invention, which has been instrumental in bringing about dramatic economic, political and social change in contemporary western society. Most important of all are the non-military means created by military research which are now being exploited by both state and non-state actors. As Graham explains, western scientific research has gone through a cycle from defence to the commercial world and back again:

Hence, technologies with military origins—refracted through the vast worlds of civilian research, development and application that help constitute high tech economies, societies and cultures—are now being reappropriated as the bases for new architectures of militarized control, tracking, surveillance, targeting and killing. 102

Looking to the future, the likelihood is that war will continue to have a significant impact on the state. Commentators today note with concern the ways in which technology is undermining the state's monopoly on the use of force as the technical and fiscal barriers to weapons production fall. However, capability should not be equated with intent, and people rarely decide to initiate violence without cause. For this reason, it is important to reflect on the political context, which will provide the policy logic for war in the future. The most important potential effect of projected technological change is the transformation of the means of production, which could trigger huge economic and political turmoil in the West. If the fourth Industrial Revolution proves to be as disruptive as is predicted, it will lead to increased instability and possibly violence. These developments will weaken the state and damage its legitimacy as it struggles to fulfil the needs of its population. Western states may be able to deal with this transformation; but if it coincides with the predicted deterioration in the global environment, the institution of the state will struggle to bear the combined weight of the demands imposed on it. Under these circumstances, civil conflict might result. The irony here is that the technological preparation for war after 1945 sowed the seeds of the state's demise, playing an important role in creating the conditions that might cause a future existential crisis of the western state. Not only has that technological advance created the conditions for war, especially civil war; it has also compounded the threat by democratizing the means of violence and empowering non-state actors. In the future, then, the war–state relationship could take an unexpected turn; and war might actually precipitate the unmaking of the state.

See Martin van Creveld, The rise and decline of the state (Cambridge: Cambridge University Press, 1999), pp. 336–414; Michael Mann, The sources of social power , vol. 4: Globalizations, 1945–2011 (Cambridge: Cambridge University Press, 2013), p. 432; Philip Bobbitt, The shield of Achilles (London: Penguin, 2002), pp. 214–19; Charles Tilly, ‘War making and state making as organized crime’, in Peter Evans, Dietrich Rueschemeyer and Theda Skocpol, eds, Bringing the state back in: strategies of analysis in current research (Cambridge: Cambridge University Press, 1985), pp. 169–86.

Lawrence Freedman, ‘The rise and fall of Great Power wars’, International Affairs 95: 1, Jan. 2019, pp. 101–18.

See Martin van Creveld, Technology and war from 2000 BC to the present (New York: Free Press, 1989); Andrew F. Krepinevich, ‘Cavalry to the computer: the pattern of military revolutions’, The National Interest , no. 37, Fall 1994, pp. 31–43.

See Michael Howard, War in European history (Oxford: Oxford University Press, 1977); Hans Delbruck, The history of the art of war , vols 1–4 (Lincoln, NE: University of Nebraska Press, 1990).

William McNeill, The pursuit of power: technology, armed force, and society (Chicago: University of Chicago Press, 1982); Maurice Pearton, The knowledgeable state: diplomacy, war and technology since 1830 (London: Burnett, 1982).

Mariana Mazzucato, The entrepreneurial state: debunking public vs private sector myths (London: Penguin Random House, 2013), pp. 92–119.

See Christopher Coker, Rebooting Clausewitz on war in the 21st century (London: Hurst, 2017); Martin van Creveld, More on war (Oxford: Oxford University Press, 2017).

See Tilly, ‘War making and state making as organized crime’, pp. 170–86.

See Macgregor Knox and Williamson Murray, eds, The dynamics of military revolution 1300–2050 (Cambridge: Cambridge University Press, 2001).

Samuel P. Huntington, ‘Arms races: prerequisites and results’, in Richard K. Betts, ed., Conflict after the Cold War: arguments on causes of war and peace (London: Pearson Longman, 2005), p. 361.

See Mumtaz Keklik, Schumpeter, innovation and growth: long cycle dynamics in post World War Two American manufacturing industries (Aldershot: Ashgate, 2003); Paul Mason, Postcapitalism: a guide to our future (London: Allen Lane, 2015), p. 33.

Joseph Schumpeter, Capitalism, socialism and democracy (London: Allen & Unwin, 1943), p. 84.

Solly Zuckerman, Scientists and war (London: Hamish Hamilton, 1966), pp. 28–9.

William McNeill, The pursuit of power (Oxford: Blackwell, 1983), pp. 280–87.

Guy Hartcup, The challenge of war: scientific contributions to World War Two (Newton Abbot: David & Charles, 1970), p. 21.

See David Wrigley, ‘The Fabian Society and the South African War, 1899–1902’, South African Historical Journal 10: 1, 1978, pp. 65–78.

Chris Hables Gray, Postmodern war: the new politics of conflict (London: Routledge, 1997), pp. 128–49.

Hables Gray, Postmodern war , p. 22.

For studies that use the term differently, see Mark Duffield, ‘Post modern conflict: warlords, post adjustment states and private protection’, Civil Wars 1: 1, Spring 1998, pp. 65–102; Mary Kaldor, New and old wars (Cambridge: Polity, 1999).

Kalevi Holsti, Peace and war: armed conflicts and international order 1648–1989 (Cambridge: Cambridge University Press, 1991), pp. 270–71.

See Stephen Cimbala, Clausewitz and escalation: classical perspectives on nuclear strategy (Abingdon: Routledge, 2012).

Carl von Clausewitz, On war (Princeton: Princeton University Press, 1976), p. 77.

Clausewitz, On war , p. 87.

Ofer Fridman, Russian ‘hybrid warfare’: resurgence and politicisation (London: Hurst, 2018), p. 91.

For more on the rituals of violence in war, see Christopher Cramer, Civil war is not a stupid thing: accounting for violence in developing countries (London: Hurst, 2006), pp. 1–20.

Barry Buzan, Military technology and international relations (London: Macmillan, 1987), p. 216.

See J. Lyall and I. Wilson, ‘Rage against the machines: explaining outcomes in counterinsurgency wars’, International Organization 63: 1, Winter 2009, pp. 67–106.

Warren A. Chin, British weapons acquisition policy and the futility of reform (Aldershot: Ashgate, 2004), pp. 43–69.

Dwight D. Eisenhower, ‘Farewell radio and television address to the American people’, 17 Jan. 1961, https://eisenhower.archives.gov/all_about_ike/speeches/farewell_address.pdf .

Harold Lasswell, Essays on the garrison state , ed. Jay Stanley (New Brunswick, NJ: Transaction, 1997), pp. 77–116.

See Samuel Huntington, The soldier and the state (Cambridge, MA: Harvard University Press, 1985).

See Mary Kaldor, The baroque arsenal (London: Deutsch, 1982).

Stephen Graham, Cities under siege: the new military urbanism (London: Verso, 2010), Kindle edn, loc. 2069, chapter 3: ‘The new military urbanism’, section: ‘Tracking citizen–consumer–soldier’.

Vincent P. Luchsinger and John Van Blois, ‘Spin-offs from military technology: past and future’, Journal of Technology Management 4: 1, 1989, pp. 21–9.

See P. W. Singer, Corporate warriors: the rise of the privatised military industry (Ithaca, NY: Cornell University Press, 2003), p. 38.

Lawrence Freedman, ‘The changing forms of military conflict’, Survival 40: 4, Winter 1998–9, pp. 39–56.

See Alvin Toffler and Heidi Toffler, War and anti-war: survival at the dawn of the 21st century (London: Little, Brown, 1993).

D. L. I. Kirkpatrick, ‘The rising unit cost of defence equipment: the reasons and the results’, Defence and Peace Economics 6: 4, 1995, pp. 263–88.

Martin Shaw, The new western way of war (Cambridge: Polity, 2004), pp. 29–41.

Bobbitt, The shield of Achilles , pp. 301–303.

David Keen, Endless war: hidden functions of the war on terror (London: Pluto, 2006), pp. 51–83.

Paul Rogers, Irregular warfare: ISIS and the new threat from the margins (London: Tauris, 2016), Kindle edn, loc. 2391–6, chapter 6: ‘Irregular war’.

See Grégoire Chamayou, Drone theory (London: Penguin, 2015).

See Robert Kaplan, The revenge of geography: what the map tells us about coming conflicts (New York: Random House, 2012).

Christopher Coker, The future of war (Oxford: Blackwell, 2004).

Lawrence Freedman, The future of war: a history (London: Allen Lane, 2017), p. xviii; Damien van Puyvelde, Stephen Coulthardt and M. Shahmir Hossain, ‘Beyond the buzzword: big data and national security decision-making’, International Affairs 93: 6, Nov. 2017, pp. 1397–416.

Elliot Cohen, ‘Change and transformation in military affairs’, Journal of Strategic Studies 27: 3, 2004, p. 396.

Mann, Globalizations, 1945–2011 , p. 432.

UK Ministry of Defence, Global strategic trends—out to 2045 (London: The Stationery Office, 2014).

Colin Gray, Another bloody century (London: Weidenfeld & Nicolson, 2005), p. 39.

Paul K. Davis, Lessons from RAND's work on planning under uncertainty for national security (Santa Monica, CA: RAND, 2012), p. 5.

Klaus Schwab, The fourth Industrial Revolution (London: Penguin Random House, 2017), p. 7.

Max Tegmark, Life 3.0: being human in the age of artificial intelligence (London: Penguin Random House, 2017), Kindle edn, p. 37.

John Thornhill, ‘AI: the new frontier’, ‘Big picture podcast’, Financial Times , 4 July 2018, https:podcasts.apple.com>podcast>ft .

Schwab, Fourth Industrial Revolution , p. 8.

See John Mearsheimer, The tragedy of Great Power politics (London: Norton, 2001).

Robert Latiff, Future war: preparing for the new global battlefield (New York: Knopf, 2017).

Rob Davies, ‘UK unveils new Tempest fighter to replace Typhoon’, Guardian , 16 July 2018.

Bob Work, Deputy Secretary of Defense, ‘Third Offset strategy bolsters America's military deterrence’, US Dept of Defense, 31 Oct. 2016, https://www.defense.gov/News/Article/Article/991434/deputy-secretary-third-offset-strategy-bolsters-americas-military-deterrence/ . (Unless otherwise noted at point of citation, all URLs cited in this article were accessible on 20 May 2019.)

Franz-Stefan Gady, ‘New US defense budget: US$18 billion for Third Offset strategy’, The Diplomat , 10 Feb. 2016, https://thediplomat.com/2016/02/new-us-defense-budget-18-billion-for-third-offset-strategy/ .

Kai-Fu Lee, AI super-powers: China, Silicon Valley, and the new world order (New York: Houghton Mifflin Harcourt, 2018), p. 19. See also Evan Feigenbaum, China's techno warriors (Stanford: Stanford University Press, 2003).

Adam Segal, ‘When China rules the Web’, Foreign Affairs 97: 5, Sept.–Oct. 2018, p. 12.

Segal, ‘When China rules the Web’.

Kai-Fu Lee, AI super-powers , p. 4.

Donella Meadows, Dennis L. Meadows, J⊘rgen Randers and William W. Behrens III, The limits to growth: a report for the Club of Rome's project on the predicament of mankind (New York: Potomac Associates–Universe Books, 1972).

See David Kilcullen, Out of the mountains: the coming age of the urban guerrilla (Oxford: Oxford University Press, 2013).

Graham Turner, Is global collapse imminent? , research paper no. 4 (Melbourne: University of Melbourne, Sustainable Society Institute, Aug. 2014).

Laurie Laybourn-Langton, Lesley Rankin and Darren Baxter, This is a crisis: facing up to the age of environmental breakdown (London: Institute for Public Policy Research, Feb. 2019), p. 5.

Matthew Green, ‘New economics—the way to save the planet?’, Reuters, 8 May 2019, https://uk.reuters.com/article/uk-climatechange-extinction/new-economics-the-way-to-save-the-planet-idUKKCN1SE2CU .

See Andrew McAfee and Erik Brynjolfson, The second machine age: work, progress in times of brilliant technologies (New York: Norton, 2014).

Yuval Noah Harari, Homo deus: a brief history of tomorrow (London: Vintage, 2017), p. 363.

PWC report, Will robots really steal our jobs? How will automation impact on jobs , https://www.pwc.co.uk/economic-services/assets/international-impact-of-automation-feb-2018.pdf .

Larry Elliot, ‘Robots threaten 15m jobs, says Bank of England chief economist’, Guardian , 12 Nov. 2015.

Ted Robert Gurr, Political rebellion: causes, outcomes and alternatives (Abingdon: Routledge, 2015), p. 58.

Daniela Russ, ‘The robots are coming’, Foreign Affairs 94: 3, June–July 2015, pp. 2–6.

Harari, Homo deus , p. 370.

‘The future of everything: how AI is augmenting therapy’, podcast, Wall Street Journal , https://www.wsj.com/podcasts/wsj-the-future-of-everything/how-ai-is-augmenting-therapy/810a7099-0cc3-4e03-8148-dd87c3673152 .

Harari, Homo deus , p. 379.

Kevin Drum, ‘Tech world welcome to the digital revolution’, Foreign Affairs 97: 4, July–Aug. 2018, p. 47.

Kai-Fu Lee, AI super-powers , p. 19.

Tegmark, Life 3.0 , p. 103.

Martin Wolf, ‘Same as it ever was’, Foreign Affairs 94: 4, 2015, p. 18.

Gurr, Political rebellion , p. 59.

Wolf, ‘Same as it ever was’, p. 22.

Gurr, Political rebellion , p. 15.

Gurr, Political rebellion , p. 16.

See Martin van Creveld, The transformation of war (New York: Free Press, 1991).

Qiao Lang and Wang Xiangsui, Unrestricted warfare (Marina Del Rey, CA: Shadow Lawn Press, 2017; first publ. 1999), Kindle edn, p. 5

Qiao Lang and Wang Xiangsui, Unrestricted warfare , p. 48.

P. W. Singer and Emerson T. Brooking, Like war: the weaponization of social media (Boston: Houghton Mifflin Harcourt, 2018), pp. 151–4.

Karen Kornbluh, ‘The internet's lost promise and how America can restore it’, Foreign Affairs 97: 5, Sept.–Oct. 2018, p. 33; Mikael Wigell, ‘Hybrid interference as a wedge strategy: a theory of external interference’, International Affairs 95: 2, March 2019, pp. 255–76; Yevgeniy Golovchenko, Mareike Martmann and Rebecca Adler-Nissen, ‘State, media and civil society in the information warfare over Ukraine’, International Affairs 94: 5, Sept. 2018, pp. 975–94.

Rory Cormac and Richard J. Aldrich, ‘Grey is the new black: covert action and implausible deniability’, International Affairs 94: 3, May 2018, pp. 477–94.

Oliver Solon, ‘Facebook removes 652 fake accounts and pages meant to influence world politics’, Guardian , 22 Aug. 2018.

Moises Naim, The end of power (New York: Basic Books, 2013), Kindle edn, loc. 2579.

Ronald K. Noble, ‘Keeping science in the right hands’, Foreign Affairs 92: 6, Nov.–Dec. 2013, p. 47.

Laurie Garrett, ‘Biology's brave new world: the promise and perils of the syn bio revolution’, Foreign Affairs 92: 6, Nov.–Dec. 2013, pp. 28–46.

Cited in Naim, The end of power , loc. 2571.

David Betz, ‘Cyberpower in strategic affairs’, Journal of Strategic Studies 35: 5, 2012, p. 695.

Dan Tynan, ‘“I wouldn't waste my time”: firearms experts dismiss flimsy 3D-printed guns’, Guardian , 1 Aug. 2018.

Graham, Cities under siege , loc. 2011, chapter 3: ‘The military urbanism’, section: ‘Tracking: citizen–consumer–soldier’.

Graham, Cities under siege , loc. 2099, chapter 3: ‘The military urbanism’, section: ‘Tracking: citizen–consumer–soldier’.


How Innovation and Technology Makes Life Easier Essay

This sample essay can be used:

  • To find inspiration for your paper and overcome writer’s block
  • As a source of information (ensure proper referencing)
  • As a template for your assignment

Introduction


For people living in an age of advanced technology, it is hard to imagine life without modern innovations. Contemporary society can hardly function properly without machines, and it is difficult to overstate the contribution of modern technology to making ordinary people’s lives easier and happier.

Some people believe that modern inventions make us lazy and erode our ability to communicate. In my opinion, the negative effects of technology overuse do exist, but they are caused mostly by a lack of education and by irresponsible behavior, and they can be avoided if people use technology sensibly. Machines were created to make our lives easier; it is not their fault if we misuse them.

Without innovation, our world would look very different. The invention of such widely used devices as the washing machine and the vacuum cleaner clearly gave ordinary people more free time. The influence of technology, however, is not limited to household chores; on the contrary, every essential part of our society’s life rests on it.

Technology has made education easy and accessible to people of any income and location: owing to modern devices and the Internet, today’s students can find any scholarly source they need at any time. Without such innovations, the work of firefighters, police officers, and rescuers would be far less effective. Thanks to machines, our healthcare system makes constant progress in finding solutions to health problems; without technology, the quality of medical services would be much lower. The adoption of technology also maximizes the independence of older adults and makes their lives easier and safer (Adams et al. 1718). In short, the use of technology saves millions of lives and a great deal of time.

The effective use of technology in all spheres of life is closely tied to a country’s level of development. The continuous introduction of innovative technologies into public systems yields better services and a higher perceived quality of life; it raises citizens’ standard of living and enhances the country’s reputation around the world. The implementation of innovation is therefore one of the defining characteristics of developed countries recognized as leaders of the world community.

The United Arab Emirates has earned a reputation as a leader in putting up-to-date innovations into practice. His Highness Sheikh Khalifa bin Zayed Al Nahyan, the president of the UAE, declared 2015 the year of innovation in the country. Within the framework of this initiative, plans for a major Museum of the Future in Dubai were launched. The museum “will produce futuristic inventions” and support the UAE’s goal of becoming the most path-breaking country in the world (Sophia par. 2). Its sections will present the newest inventions and demonstrate simulations, enabling visitors to glimpse the future through 3D printing techniques (Sophia par. 4). The museum will also bring the most prominent specialists together under one roof, giving the best inventors an opportunity to realize their full potential in constructing futuristic prototypes. This remarkable institution seems certain to attract attention from researchers and the general public worldwide.

Technology and innovation are the main engines driving our society toward a happier future. The most developed countries support inventions and bring them to life, and the UAE remains a leader in implementing innovation, impressing the world with its new projects.

Works Cited

Adams, Anne, Julie Boron, Neil Charness, Sara Czaja, Katinka Dijkstra, Cara Bailey Fausset, Arthur Fisk, Tracy Mitzner, Wendy Rogers, and Joseph Sharit. “Older Adults Talk Technology: Technology Usage and Attitudes.” Computers in Human Behavior 26.6 (2010): 1710-1721. Print.

Sophia, Mary. Sheikh Mohammed Launches Museum of the Future in Dubai. 2015. Web.


