Ascent Look Out 2016+
Technologies that will Impact your Business

These tech trends will help you understand the possibilities that lie ahead. Learn more about the new and emerging technologies that form the basis for tomorrow’s leading-edge solutions. Doing so will help you come up with novel ideas that match the opportunities and threats currently surfacing.

Technologies in the Digital Journey
Technology x y x (phone) y (phone) Impact Range Section Keys Description
3D Printing 50 11 70 11 Early Adoption 2018 Transformational Concept

Definition
Also referred to as additive manufacturing, 3D printing is a manufacturing approach that materializes 3D objects from virtual designs created using CAD (Computer-Aided Design) programs. The size of objects that can be printed extends from the nanoscale to entire buildings. To create parts, the 3D printer superposes layer upon layer of material using various additive processes until the entire object is created. Each layer is effectively a thinly sliced horizontal cross-section of the eventual object.
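
To make the slicing idea concrete, here is a minimal sketch in Python. It assumes a solid sphere as a stand-in for the CAD model (the radius, layer height and function names are illustrative, not taken from any real slicer) and computes the horizontal cross-section produced at each layer height.

```python
# Minimal slicing sketch (assumption: a solid sphere stands in for the CAD model).
import math

RADIUS_MM = 20.0       # hypothetical part: a 20 mm radius sphere
LAYER_HEIGHT_MM = 0.2  # a typical fused-deposition layer height

def slice_sphere(radius, layer_height):
    """Return (z, cross-section radius) for each printable layer."""
    layers = []
    z = -radius
    while z <= radius:
        # A horizontal cut through a sphere at height z is a circle.
        r = math.sqrt(max(radius**2 - z**2, 0.0))
        layers.append((round(z, 3), round(r, 3)))
        z += layer_height
    return layers

if __name__ == "__main__":
    for z, r in slice_sphere(RADIUS_MM, LAYER_HEIGHT_MM)[:5]:
        print(f"layer at z={z} mm -> circle radius {r} mm")
```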

Applications
• Rapid prototyping — making the fabrication of a part or model, often designed using CAD tools, quick and easy
• Rapid manufacturing — inexpensive production of a small number of parts
• Mass customization — manufacturing unique objects personalized directly by users through a web-based interface
• Maintenance — creating parts for vehicles, aircraft, machinery and more
• Casting — creating foundry patterns for casting parts
• Retail — allowing consumers to purchase and download a product design to print at home
• Healthcare — creating bespoke replacement knees, hips, ears, blood vessels, heart valves and other body parts
• Pharma — creating personalized pills

Trajectory

Impact
• Alters the product supply chain, eliminating the need for both component and end product transportation
• May spawn a new wave of manufacturing relocation across the globe as components can be created anywhere
• Simplifies customization while reducing material waste and accelerating lead times
• Allows vehicle and aircraft manufacturers to create lightweight parts to help improve fuel efficiency
• Helps reduce the cost of cast parts while enhancing precision
• Improves after sales maintenance by making replacements immediately available
• Lowers the investment barrier for people needing to manufacture small series of objects
• Allows individuals to design and share objects in the same way they share content within the Creative Commons movement or software pieces in the open source movement

Evolution
• 3D printing was initially dedicated to prototyping in the manufacturing industry during the early stages of product development.
• A drop in the price of 3D printers, the appearance of open hardware initiatives and the general ‘Do It Yourself’ movement spread 3D printing to the open source and ‘makers’ communities.
• CAD object design libraries are starting to appear, allowing people to share, customize and print their own objects at home.
• Amazon and other players have opened online 3D printing stores offering commercial 3D printers and scanners, printer filaments, CAD software and more.
• Advances in 3D printing mean composite objects made of multiple materials can also be printed.
• Research into new materials with high mechanical properties, such as steel or titanium, has opened new uses in new sectors, such as aerospace where SpaceX created the first 3D printed rocket parts.
• Progress is also being made in printing bio-materials as well as other materials such as concrete, glass, plastic, metal, ceramic and foodstuffs.
• Progress is also being made toward high-volume printing.
• Complementary 3D scanning technology, which collects data on shape and size of objects, is making it easier to create initial designs.

Issues
• 3D printing systems will need to be linked to current manufacturing systems to ensure end-to-end visibility of the product lifecycle is maintained.
• Issues may arise relating to the intellectual property of models and CAD designs.
• 3D printing could mean countries’ international trade strategies for goods become difficult to enforce.
• Tracing the use of raw materials in specific value chains will likely become more complex with 3D printing.

5G 33 32 63 26 Emerging 2019+ High Concept

Definition
5G represents the next generation of communication networks and services. It is not intended to be an evolution of legacy communication technologies, but a novel approach for fulfilling the requirements of future applications and scenarios. As such, legacy 4th generation LTE technologies continue to evolve in parallel.

Applications
• Connecting trillions of smart devices — on the Internet of Things (IoT)
• Providing massive broadband access — for high quality TV or video streaming to mobile devices, video conferencing and other applications
• Enabling extreme real-time applications — including augmented reality
• Delivering ultra-reliable applications — including e-health and energy services, manufacturing control and connected vehicles
• Providing life-line communications — including emergency calls during a natural disaster
• Enabling vehicle communications — improving traffic safety, assisting drivers with real-time traffic conditions, enhancing vehicle reliability and more

Trajectory

Impact
• Offers high levels of network flexibility and dynamic network reconfiguration based on analytics enabled by Software-Defined Networking (SDN) and Network Function Virtualization (NFV)
• Delivers higher levels of security and improved user privacy control
• Expected to cut network latency to around one to two milliseconds, enabling innovations such as the connected car
• Expected to improve wireless capacity 1,000-fold and, as such, deliver the capacity needed for mobile video streaming, video conferencing and more
• Projected to save up to 90 percent on energy, mainly from the radio access network
• Forecast to connect more than seven trillion wireless devices serving over seven billion people in the IoT
• Accelerates new service time-to-market, potentially reducing the average creation time cycle from 90 hours to 90 minutes

Evolution
• Current research is focused on the 5G network’s architecture, functionality and capabilities.
• First 5G deployments are expected around 2020.
• In the meantime, legacy 4th generation LTE technologies are continuing to evolve, offering a fully IP-based, integrated system providing high speeds indoors and outdoors, with premium quality and high security.
• LTE-A (LTE Advanced), for example, offers increased speeds, performance and capacity along with higher densities than its predecessor.

Issues
• Various regions around the world are currently competing to lead 5G standardization activities.
• 5G must accommodate a wide range of use cases from diverse verticals, each with advanced requirements around latency, resilience, coverage and bandwidth.
• These use cases need to be considered in the early phases of the standardization process and, as such, non-Telco companies (automotive, IT services, consumer goods, health and others) need to be included too.
• 5G is a combination of diverse technologies, the maturity of which will be varied when 5G is first deployed.

Advanced Data Visualization 89 13 79 43 Mainstream 2016 Medium  Concept

Definition
As the volume of data grows, we need ways to easily grasp its implications. Visual representations (data visualization technologies or dataviz) are one of the more natural ways to understand patterns and relationships in complex data sets. Many solutions also enable interaction with these data sets (manipulating the data and changing viewpoints, for example), allowing us to explore them in more depth, reveal hidden relationships, find relevance among a large number of variables and more. We also need a means to explore the massive volumes of near instantaneous data provided by our increasingly connected world. With a growing number of data sets no longer static, advanced dataviz solutions enable the visualization and exploration of rapidly-evolving real-time data sets.
In some cases, visualization can go beyond the 2D screen, providing a 3D representation within augmented and/or virtual reality environments or even immersive ‘caves’.
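
As a simple illustration of the kind of multi-variable view such tools provide, the sketch below uses matplotlib with synthetic (hypothetical) data; real dataviz products add interactivity, filtering and live data feeds on top of this basic idea.

```python
# Illustrative sketch only: a basic multi-variable visualization with matplotlib,
# using synthetic (hypothetical) data to stand in for a real data set.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.normal(size=500)                        # one measured variable
y = 0.6 * x + rng.normal(scale=0.5, size=500)   # a correlated second variable
z = x * y                                       # a third variable, encoded as color

fig, ax = plt.subplots()
points = ax.scatter(x, y, c=z, s=15, cmap="viridis")
fig.colorbar(points, ax=ax, label="derived variable z")
ax.set_xlabel("variable x")
ax.set_ylabel("variable y")
ax.set_title("Relationships among three variables in one view")
plt.show()
```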

Applications
• Scientific computing
• Engineering
• Finance
• Health
• Pharmaceuticals
• Big Data projects
• Social network diagrams
• Geographical diagrams

Trajectory

Impact
• Improves decision-making, particularly in enterprises.
• Extends the work done by the Data Scientist.

Evolution
• Initially, visualizations were limited to specific types of business charts provided by productivity and Business Intelligence (BI) solutions.
• Big Data increased the need for intuitive, visual ways of traversing content. As such, many of today’s BI products now include advanced data visualization capabilities.
• Open source data visualization tools are progressing quickly, but are often basic and less user-friendly.
• Data visualization could evolve significantly with advances in human computer interfaces. Immersive systems that include advanced 3D graphics, for instance, are already being employed in the astronomy and health science fields as well as for analysis of complex economic data, such as on the stock market.
• In time, advanced data visualization tools will be democratized through mainstream solutions.
• Data sets are no longer static and, as such, being able to visualize and explore real-time (or quickly evolving) data sets is useful and sometimes necessary.
• The paradigm of the data warehouse is progressively becoming obsolete: as the volume of available data increases, rather than digging into three years, three months or even three days of historical data, we will need to explore massive amounts of nearly instantaneous data.

Issues
• High interactivity visualization can be quite complex, demanding very specific knowledge.
• Even as tooling improves, data visualization is still relatively low-level and often requires some degree of coding.
• Badly used or understood, visualization can provide incorrect outcomes and misdirection.

Advanced Robotics 20 7 34 12 Emerging 2019+ Transformational Concept

Definition
The field of robotics is gathering a lot of momentum, thanks to advances in artificial intelligence (AI) and computer vision (artificial vision computer systems implemented in software and/or hardware that are able to perceive, process and understand visual data such as images and videos). Today’s advanced ‘intelligent’ robots are autonomous and able to make informed decisions about their own actions. They’re also increasingly connected, able to interact with their environment and other smart machines as part of the Internet of Things. Not necessarily humanoid, today’s robots are mostly machines or even vehicles, such as autonomous cars.

Applications
• Industrial processes — such as for repetitive tasks or precise guidance
• Services and knowledge workers — including at work, in public or in hazardous environments
• Surveillance — particularly in harsh or dangerous environments
• Fire prevention — including detection and extinguishing
• Industrial inspection — such as network repairs
• Cleaning, sorting and delivery — of mail, for example
• Robotic doctors — allowing doctors to diagnose, and even operate on patients from a remote location
• Care — including of elderly people
• Autonomous vehicles — such as self-driving cars
• Spacecraft — including planetary navigation
• Agriculture — including sheep-shearing
• Social, entertainment, sports or even teaching

Trajectory

Impact
• Improve safety by replacing workers in dangerous activities, such as fires or nuclear power plant accidents
• Increase efficiency with AI systems recognizing, learning and optimizing otherwise imperceptible mathematical patterns to maximize efficiency and minimize cost
• Maximize productivity as robots do not tire and can, therefore, maintain a consistent level of performance indefinitely
• Reduce cost as the only outlays required are the initial purchase price and subsequent maintenance
• Enhance flexibility by rapidly adapting to constantly changing work environments
• Improve customizability by taking the form factor most suited to the job

Evolution
• Robotics has a long history, mainly related to factory automation.
• Advances in AI, improved connectivity and distributed computing models (cloud) brought new life into the field.
• Robots are becoming smarter, moving beyond doing repetitive tasks in factory scenarios to making informed decisions around their actions.
• Many large planes are now essentially robots, flying themselves with pilots only really needed to reassure passengers.
• Japan has historically pushed robotics to compensate for both a lack of workforce and increasing dependence as its population ages.
• New entrants, including Google with its expertise in AI, are betting on advanced robotics.
• New use cases, military drones for instance, are becoming increasingly important and their profile is rising.
• We may one day see a war fought by robot armies.
• The future will likely see robots gain social intelligence, artificial emotions and more natural interaction.

Issues
• The media’s portrayal of ‘robots’ often presents a distorted view of the truth.
• “Moravec’s Paradox” is still true: robots are very good at complex tasks, but fail (sometimes quite miserably) in day-to-day tasks, such as climbing stairs or opening doors.
• Staff may resist as robots’ role moves from facilitating human labor to supplanting it.
• Connected robots, like other connected devices, are raising concerns around privacy.
• In personal scenarios, such as situations relating to health and finances, people prefer a human touch.
• Badly designed smart machines that don’t function as expected could potentially bring about serious unwanted consequences.
• Industry experts and scientists are concerned about the consequences of reaching the Technological Singularity — the point where machines would be more intelligent than humans.

Autonomous Vehicles 32 12 19 22 Adolescent 2018 Transformational Concept

Definition
Autonomous vehicles form an emerging field arising from the combination of transportation vehicles and robotic capabilities that include environmental sensors, context awareness and autonomous decision-making using artificial intelligence. These self-driving vehicles rely on these technologies to drive themselves while recognizing and responding to their surrounding environment. Major players in this space include both automotive and technology companies, which are competing and/or collaborating.

Applications
• Personal transport — including the autonomous (and semi-autonomous) cars created by Google and Tesla
• Industrial surveillance and transport — especially in harsh, wide or difficult to access environments such as mines, pipelines or traffic jams
• Urgent, specialized transport — including aerial drones for search and rescue
• Unmanned parcel delivery drones — such as those created by Amazon and Google
• High-volume logistics — including autonomous lorry platoons with just one driver controlling several vehicles
• Agricultural transport — such as for crop monitoring and cattle control
• Military transportation and surveillance — especially in dangerous environments

Trajectory

Impact
• Reduces the number of accidents caused by human error
• Optimizes the number of vehicles on the road
• Accelerates point-to-point logistics
• Minimizes risk in military and other dangerous scenarios
• Extends transport to those unable to drive, including children, seniors and individuals with disabilities
• Makes expensive services, such as aerial photography, affordable
• Offers an alternative to current means of transportation
• Opens up new opportunities for novel services based on collaborative uses
• Expected to have transformational impact in automotive, insurance, energy, health, defense and city planning
• Projected to have greatest impact once vehicles are fully automated as mutualization will be much easier

Evolution
• The long-term goal is for fleets of autonomous vehicles to complement or even replace traditional, human-driven vehicles.
• Though still in a very active R&D phase, the major players are already testing autonomous vehicles.
• The main R&D focus is on helping drivers reach their designated destination effortlessly, though manufacturers also want to ensure drivers feel satisfied and safe.
• Governments are beginning to play a more active role, crafting regulations and providing funding for R&D.
• Wide adoption is predicted to be around 2020, but that may be a bit optimistic.

Issues
• Cost: Sensors used in autonomous vehicles, such as the LIDARs that illuminate the surroundings with a laser and analyze the reflected light, are still expensive.
• Complex environments: The variability of conditions (weather, environment and other actors) is extraordinarily complex.
• Testing: Tests are very promising, but have mostly been carried out under controlled conditions.
• Safety: Safety requirements are very high since human lives are at risk.
• Regulations: Regulations for autonomous vehicles are almost non-existent. Creating them will be a complicated process.
• Ethics: As with all fields related to artificial intelligence and robotics, ethics play an important role where machines make tricky decisions that may put lives in danger.

Biocomputer 33 49 47 49 Emerging 2019+ High Concept

Definition
Biocomputers are computers that use biological materials such as DNA and proteins to perform computational calculations involving the storage, retrieval and processing of data. They leverage the capabilities of living beings, relying on nanobiotechnology to engineer biomolecular systems that provide the computational functionality.

Applications
• Performing living processes — based on complex biomolecular interactions involving biomolecules coded by our DNA
• Providing advanced bio-templates for bacterial and viral analysis — sparking other biomechanical technologies since it’s currently the only self-replicating computing technology we know of
• Executing calculations that require extreme parallelism — which can be achieved by billions of molecules interacting simultaneously with each other
• Solving problems that cannot be deterministically solved in polynomial time
• DNA shows promise as a mechanism for long-term storage of information

Trajectory

Impact
• May offer an alternative to silicon-based systems — potentially faster, smaller and more energy efficient for some specialized problems
• Potentially offering massive parallelism, massive storage and high levels of artificial intelligence alongside low waste and low energy usage
• May also provide a whole new field of innovation in healthcare and life sciences — such as detecting cancerous activity within a cell and releasing an anti-cancer drug upon diagnosis

Evolution
• Less than a decade ago, the California Institute of Technology invented a basic logic gate inside living yeast cells using RNA molecules (molecules that, like DNA, carry genetic information).
• IMEC, the world-renowned nanoelectronics research center in Leuven, Belgium, is directing research into biochips.
• A few years ago, Stanford University made the final component needed to build biocomputers available by developing the first biological transistor, called ‘transcriptor’, using DNA and RNA.
• In February 2016, parallel computation was achieved in protein filaments with biological agents.
• Biocomputers may offer some interesting applications in the future.

Issues
• Today, biocomputers are still a very prospective field of research.
• Despite being able to execute a high number of multiple parallel computations, the DNA-computer biocomputing approach has a slow processing speed with a response time that may be hours or days, rather than milliseconds.
• Biocomputing results are much harder to analyze than results from a digital computer.
• There is growing public concern around biocomputers’ relationship with genomics and, as such, the potential for biological catastrophes

Biometrics 91 10 84 24 Mainstream 2016 Medium Concept

Definition
Biometrics refers to the detection and measurement of specific measurable human characteristics. These can be used, with a certain degree of certainty, to differentiate individuals. Accuracy can be further improved by combining several different biometric mechanisms into ‘multimodal biometric systems’. Well-known examples include fingerprint, retinal blood vessel, iris and voice recognition, though there are many other human characteristics that can be used to identify an individual and may be useful in certain situations. These include heart rate and walking pace. And while DNA provides the ultimate biometric, DNA analysis currently takes too long to make it a viable option.

Applications
• Identifying and authenticating individuals — since biometrics are unique
• Enabling multi-factor authentication — when combined with other security mechanisms such as smart cards
• Offering alternative or complementary security mechanisms — such as fingerprint recognizers in smartphones
• Enhancing consumer device security — particularly for smartphones
• Providing strong security in defense or homeland security scenarios — including detecting suspects at borders and crowd control

Trajectory

Impact
• Enables strong authentication since biometrics are unique and difficult to reproduce
• Provides advantages over passwords for access control and security systems since biometrics don’t require any memorization

Evolution
• Some biometric systems, including fingerprint scanners, have a long history behind them.
• Advances in both sensors and recognition software mean false non-match rates (which disturb legitimate users) and false match rates (which are a security risk) are improving — though there’s a trade-off between cost, response time, convenience and reliability (see the sketch after this list).
• Local use of biometric systems in trusted personal devices is now more acceptable.
• Driven also by the falling cost of biometric systems and their integration with authentication and authorization standards in software, biometrics are gaining popularity in consumer devices.
• Advances in artificial vision are improving biometrics based on pictures or video — including facial recognition.
• Behavioral or passive biometrics exploits dynamic characteristics — including keystrokes, mouse moves and even complex body movements such as gait — enabling continuous authentication.
• Future trust and compliance models will likely leverage a combination of biometric-based security and new smartphone capabilities in the never-ending quest to ease access control.
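
The trade-off mentioned above between false matches and false non-matches can be illustrated with a small sketch. The similarity scores below are hypothetical; the point is only how moving the decision threshold trades security against convenience.

```python
# Illustrative sketch (hypothetical similarity scores) of the trade-off between
# false match and false non-match rates as the decision threshold moves.
genuine_scores  = [0.91, 0.88, 0.75, 0.96, 0.69, 0.83]   # same-person comparisons
impostor_scores = [0.34, 0.52, 0.61, 0.28, 0.47, 0.72]   # different-person comparisons

def rates(threshold):
    false_non_match = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    false_match     = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return false_non_match, false_match

for t in (0.5, 0.65, 0.8):
    fnmr, fmr = rates(t)
    # Raising the threshold improves security (lower FMR) but inconveniences
    # legitimate users (higher FNMR), and vice versa.
    print(f"threshold {t}: FNMR={fnmr:.2f}  FMR={fmr:.2f}")
```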

Issues
• Biometrics requires specialized equipment for capturing the biometric characteristics
• Accuracy rates vary substantially among different biometric mechanisms and performance varies among large populations. A small proportion of users face recurrent difficulties.
• As such, biometrics is best suited as a complement to conventional authentication methods.
• Applicability for identification (determining who a person is among many) is more limited than for verification (checking that someone is one given person) because of the matching tolerances needed.
• Mistrust is still strong because biometric data is very personal and, unlike a password, cannot be changed.
• In addition, privacy concerns may lead to user acceptance or regulatory issues.
• Biometrics may raise ethical and social implications for Corporate Social Responsibility imperatives.
• Biometric systems can be tricked. Facial recognition systems can be fooled by using 2D or 3D masks, for instance. Anti-spoofing technologies are emerging to address this problem.

Blockchain 61 20 31 49 Adolescent 2017 High Concept

Definition
Blockchain is a form of distributed database that uses cryptographic techniques to ensure records are stored sequentially and tamper-proof. Public, private or community, blockchains provide an alternative to a centralized ledger maintained and controlled by a single entity. In doing so, they enable a new model that allows trust to be established in a peer-to-peer network without needing a trusted third party.
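
A minimal sketch of the underlying idea, assuming a simple hash chain rather than any real blockchain protocol (no consensus, mining or networking): each block stores the hash of its predecessor, so tampering with an earlier record breaks every later link.

```python
# Toy hash chain, not a production blockchain: illustrates tamper-evidence only.
import hashlib
import json
import time

def block_hash(block):
    """Hash the block's contents (everything except the stored hash itself)."""
    payload = json.dumps(
        {k: block[k] for k in ("timestamp", "data", "previous_hash")},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, previous_hash):
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    block["hash"] = block_hash(block)
    return block

def verify(chain):
    """A chain is valid if every block's hash matches its contents and its link."""
    if block_hash(chain[0]) != chain[0]["hash"]:
        return False
    for prev, current in zip(chain, chain[1:]):
        if current["previous_hash"] != prev["hash"] or block_hash(current) != current["hash"]:
            return False
    return True

chain = [make_block("genesis", previous_hash="0" * 64)]
chain.append(make_block({"from": "A", "to": "B", "amount": 10}, chain[-1]["hash"]))
chain.append(make_block({"from": "B", "to": "C", "amount": 4}, chain[-1]["hash"]))

print("chain valid:", verify(chain))        # True
chain[1]["data"]["amount"] = 1_000_000      # tamper with an earlier record
print("chain valid:", verify(chain))        # False: stored hashes no longer match
```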

Applications
• Financial applications — including bookkeeping, cryptocurrencies and securing transactions
• Non-financial applications — including (to some degree) managing intellectual property (anteriority), digital identities, electronic health records, along with voting, supply chain authentication and smart contracts

Trajectory

Impact
• Changes the value chain by removing gateways and intermediary processes in favor of direct trusted transactions between users — including merchants, services providers and customers
• Challenges industries where a central ledger was mandatory — such as banking or insurance
• Disrupts domains where a trusted third party was required
• Enables the emergence of industries where a central ledger was needed but impossible to put in place

Evolution
• Blockchain became well-known for its use as a public ledger for Bitcoin transactions and supporting various other cryptocurrencies.
• Its application has now widened to include diverse applications in almost all domains.
• Organizations, mostly industry but also some governments, are investing heavily in building proofs of concept for specific use cases.
• Ethereum, a platform that enables the creation and automation of smart contracts, is an example of one of the more sophisticated implementations.
• Blockchain technology company R3 is leading a consortium of 40 banks in testing the use of blockchain solutions to facilitate the trading of debt instruments.
• The Hyperledger Project, a collaborative effort created to advance blockchain technology, aims to identify and address important features for a cross-industry open standard for distributed ledgers.
• Blockchain protocols are evolving to extend functionality, address security risks and increase capacity.

Issues
• Blockchains, including Bitcoin, remain largely experimental and lack a long-term track record.
• Blockchain cryptography is currently not post-quantum and could therefore be threatened by quantum computing.
• Security depends on the diversity and independence of a blockchain’s nodes. As such, a single player controlling a majority of nodes could theoretically circumvent some security mechanisms.
• The ‘proof of work’ uses a considerable amount of computing power and energy, though more energy-efficient alternatives, such as ‘proof of stake’ or ‘proof of burn’, are being sought and explored.
• Improvements are needed to ensure the volume of transactions required for mainstream applications can be effectively handled.
• Different blockchains may be required for different use cases.

Brain-Computer Interface 25 6 55 10 Emerging 2019+ Transformational Concept

Definition
The brain-computer interface (BCI) is a direct communication pathway between the brain and an external device based on neural activity generated by the brain. While the majority of approaches use invasive devices, the most promising initiatives are based on non-invasive approaches. Here, electroencephalogram or EEG devices record the brain activity. EEG’s fine temporal resolution, ease of use, portability and low set-up cost have made it the most widely studied potential candidate for a non-invasive interface.
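
As a rough illustration of the kind of signal processing an EEG-based BCI performs, the sketch below estimates the power in the 8-12 Hz alpha band of a synthetic signal; the sampling rate and signal are hypothetical stand-ins for a real recording.

```python
# Illustrative sketch (synthetic signal, not real EEG data): estimating the power
# in the 8-12 Hz alpha band, one of the simple features used by EEG-based BCIs.
import numpy as np

FS = 256                      # sampling rate in Hz (a common EEG rate)
t = np.arange(0, 4, 1 / FS)   # four seconds of signal
# Hypothetical recording: a 10 Hz alpha rhythm buried in noise.
signal = 0.8 * np.sin(2 * np.pi * 10 * t)
signal += np.random.default_rng(1).normal(scale=1.0, size=t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, d=1 / FS)

alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = spectrum[alpha].sum() / spectrum.sum()
print(f"fraction of power in the alpha band: {alpha_power:.2f}")
# A BCI might map changes in this kind of band power to a control signal,
# e.g. relaxed (high alpha) versus concentrating (low alpha).
```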

Applications
• Initially, simple detection of specific external signals and very basic interaction with objects
• Today, mainly used for assisting, augmenting or repairing human cognitive or sensory-motor functions — for controlling prosthetics, for instance
• Also gaming — where crude EEG devices are already available as gaming controls
• Later, providing more advanced communications — including speech recognition focused on basic phonemes and language patterns
• In the future, enabling communication, interactions with computing devices and more

Trajectory

Impact
• Allows people to interact with computers and devices without actions or movements
• Offers a completely new communications channel — especially via non-invasive technologies and more advanced interactions (mainly communication)
• Impacts almost all aspects of man-machine interfaces — in both our personal and professional lives

Evolution
• EEG was first recorded in humans in 1924, and research labs have since pursued EEG-based BCI extensively.
• During the 1970s, research on primates using invasive methods allowed prosthetic mechanisms to be controlled.
• Advances in neuroimaging resolution and EEG capabilities have allowed devices to be controlled more precisely.
• Initial consumer devices were crude and mainly used for gaming and stress control.
• Recent advancements showed that EEG-based BCI can accomplish tasks on a similar level to invasive BCI.
• It could eventually become one of the most advanced methods for interacting with computers and devices.

Issues
• Invasive methods require complex medical procedures and may have undesired side-effects. Ensuring a permanent connection with specific areas of the brain is difficult as the connection may decay, for example.
• Non-invasive EEG is still at the early stages of research.
• Brain activity is continuous and, as such, it’s not easy to isolate the significant signals. Non-invasive techniques will require additional effort.
• The brain is incredibly complex (more than 100 billion neurons), and analyzing patterns based on electrical signals is extremely challenging, compounded by individual differences.
• Longer term, there may be ethical risks such as the potential for mind-reading and mind-control.

Cloud Services Integration 85 10 61 35 Mainstream 2016 High Concept

Definition
The new computing continuum will be a heterogeneous environment based on the decentralization and federation of diverse computing entities and resource types. These will include multi-cloud (and cloud federation) models, with their diverse, decentralized and autonomic management, and hybrid cloud models that cross boundaries between internal and external cloud services or between public, private and community providers. Cloud Services Integration (CSI) provides a flexible means of assembling these various cloud-based elements in support of business processes that traverse IT domains. Compute workloads are deployed across multiple cloud environments to provide an optimal delivery model.

Applications
• Global business — enabling the dynamic coordination of computing loads across geographies
• Integrated manufacturing — supporting information transparency and collaboration between MES/MOM, ERP and PLM systems
• Industry — helping gas companies, for instance, optimize extraction by enabling tight integration between the gas turbine manufacturer’s PLM cloud service, the company’s own private asset management cloud service and a subsurface analysis cloud service
• Enhancing information services — allowing organizations to augment their own services with public third-party services, such as weather or traffic
• Business continuity — reducing the cost of disaster recovery while enhancing flexibility and agility

Trajectory

Impact
• Helps companies balance functionality, flexibility and investment protection
• Reduces cost by eliminating the need for hardware to absorb peak demands, reducing overall management cost and energy consumption
• Accelerates computing resource delivery while improving resource availability and optimizing resource utilization
• Helps small and medium cloud companies handle peak-loads, acquiring additional capacity as and when needed
• Brings workloads closer to where demand is, eliminating unnecessary latency
• Ensures compliance with national regulations when customers have specific restrictions about the legal boundaries in which their data and application can be hosted

Evolution
• Simple multi-cloud capabilities were first made available through APIs.
• These are now evolving into a global-scale service-based architecture.
• ServiceNow played a key role in showing industry how the various service providers can be brought onto a single platform.
• With private cloud now becoming mainstream, companies are looking at hybrid cloud models.
• In the longer term, the majority of companies are likely to adopt a multi-cloud strategy with services from multiple providers.
• The future will see a wider variety of service delivery venues available, allowing users to schedule and automate delivery of their workloads to the most suitable clouds.
• Trusted Information Brokers — the evolution of today’s cloud-ready identity federation services — will ensure seamless authentication and access control for information and services. They will use characteristics belonging to the information requester — such as age, organization or citizenship — while ensuring that proprietary or personal information is not spread unnecessarily.

Issues
• Compatibility across services is still an open issue, compromising further advance of inter-cloud service provisioning. Cloud market leaders are yet to widely adopt any standardization efforts, but multi-cloud may be the market force that pushes that adoption, breaking down current vendor lock-in.
• Multi-cloud environments increase the complexity of service level agreements since providers rely on diverse services from a more complex cloud ecosystem. Existing contracts will need to be analyzed and extended so chains of contractual relationships can be automatically established across multiple and heterogeneous cloud providers.
• Multi-cloud environments will need virtual networks to be set up across multiple cloud providers. Yet poor network performance is a roadblock for wider cloud adoption, while cloud federation requires extensions to the concept, techniques and primitives of cloud networking.
• The constant changes in security parameters enabled by dynamic multi-cloud management models are amplifying current security concerns.

Cognitive Computing 36 27 20 45 Early Adoption 2018 High Concept

Definition
Cognitive computing can be seen as an integration of algorithms and methods from diverse fields such as artificial intelligence, machine learning, natural language processing and knowledge representation that enhances human performance on cognitive tasks. Breaking the traditional boundary between neuroscience and computer science, cognitive computing systems are able to learn, understand natural language and reason, and can interact with human beings more naturally than traditional programmable systems. Cognitive computing systems have three main types of capability:
• Systems with engagement capabilities change the way humans and systems interact, extending human capabilities by providing expert assistance.
• Systems with evidence-based decision-making capabilities continually evolve through new information, outcomes and actions.
• Systems with discovery capabilities uncover insights that even the most brilliant human beings might not discover, finding connections by understanding the vast amounts of information available around the world.

Applications
Applications of Cognitive Computing are multiple and will continue to expand:
• Expert assistance — solving tasks or long-term projects, answering questions, making suggestions, revealing patterns. Examples include personal assistants.
• Intuitive communication — understanding a person’s real intent, attitude, meaning, emotions and mood to suggest effective communication strategies in digital experiences. Examples include marketing and sales campaigns, TV advertising, political campaigns and matching the call agent to the caller’s personality in call centers.
• Accessibility for the impaired — specialized devices with embedded software responding to simple intuitive gestures such as finger pointing and reading signs, papers, books, labels and products.
• Intelligent narratives — providing automated, yet natural language, insightful summaries out of complex information. Examples include natural language explanations of analytical conclusions.
• Predictive customer engagement — leveraging knowledge bases linked to CRM systems to understand customers’ data, discover patterns, infer relationships, anticipate customer needs and actions and, thereby, predict the optimal engagement strategy.

Trajectory

Impact
• Redefines the nature of the relationship between people and their increasingly pervasive digital environment
• May allow machines to take over mundane activities, transforming jobs, companies, industries, markets and economies
• May open the door for new opportunities and business models around cognitive modelling, human touch interpretation and more
• Paves the way for smart machines with reasoning abilities analogous to the human brain

Evolution
• Computers using artificial intelligence have been a field of academia and research labs since the 1960s.
• But the world lacked the digital infrastructure, algorithms and knowledge bases to underpin such systems — until recently:
— Computing power and storage have now reached critical scale.
— Huge new sources of structured and unstructured digital data are available for algorithms to ingest, analyze, and compare.
— Artificial intelligence improvements in voice, text and vision can all be leveraged by new sensors that create even more context and data that can be symbolically processed and acted upon.
— Software is everywhere and mobility opens opportunities for highly contextual software.
• The full scope of the processes and domains that will be impacted by cognitive computing is still elastic and emergent.
• As an expert assistant, cognitive computing may one day act virtually autonomously in various problem-solving scenarios.

Issues
• Cognitive computing requires unique skills, such as natural language processing and machine learning.
• Systems will be limited to the expertise and data they are exposed to.
• Concerns have been raised around privacy breaches and machines replacing the human workforce.
• Enterprises are not yet ready for computer systems as partners.
• Business and the end user must be trained on what to expect from the world of cognitive computing — where failure is accepted and learnt from, for instance.

Containers 87 22 91 12 Early Adoption 2017 Low Concept

Definition
Containers are a lightweight virtualization technology that provides applications with an isolated environment inside a single operating system instance. Containers provide users and applications running inside them with the illusion and experience of running on their own dedicated machine.

Applications
• Offering end-to-end application management, from prototyping and development to production
• Porting workloads across service providers
• Encapsulating solutions
• Enabling software-defined everything solutions
• Delivering adaptive applications or Infrastructure-as-Code where workloads react and reconfigure themselves to accommodate changes in the underlying infrastructure
• Enabling software componentization for solution architectures, including microservices

Trajectory

Impact
• Allows applications to be pre-packaged in ready-to-use and easy-to-deploy containers
• Improves service delivery (greater flexibility, maintainability, reliability, fault tolerance and security), opening the door to concepts such as ‘Zero Downtime’ and ‘100% SLAs’
• Reduces cost and enhances capacity management through workload consolidation and improved resource utilization
• Facilitates interoperability among cloud offerings
• Plays a key role in enabling the cloud-native approach for deploying applications on cloud
• Integrates well with emerging architecture (micro-services) and development (DevOps) models
• Cuts the cost of testing and quality assurance processes
• Improves troubleshooting capabilities

Evolution
• The base technology has existed for many years in Unix-based operating systems, such as Linux or Solaris.
• Containers gained visibility in the market through Docker, which provides a lightweight alternative to virtualization technology that is very well integrated with development tools and cloud deployments.
• Containers are becoming the preferred model for software packaging and distribution.
• The container market is gaining a certain degree of standardization and interoperability.
• There are many innovation activities in the field, including:
— Advanced workload scheduling
— Trusted containerized computing
— Native support in embedded devices
— Optimized base operating system for containers
— Integration with other cloud technologies

Issues
• The technology ecosystem is still young and experiencing extreme volatility in its evolution:
— (Too) many tools of diverse and questionable maturity
— Standardization and interoperability activities are still limited
— Support for some legacy technologies is limited
— Stronger security models needed
• The transition and transformation from virtualization to containerization will require changes in processes, practices, skills, culture and organizational boundaries.

Context Broker 70 43 75 43 Adolescent 2018 Medium Concept

Definition
Context brokers collect and store diverse data, then leverage analytics to deduce context from the interactions among the data before triggering actions based on that contextual information. In doing so they effectively empower data, extracting its meaning in relation to other pieces of data and unlocking the potential of data that might otherwise only be used in isolation. Context brokers play a critical role in the delivery of context-enriched services, which use information about a person or object to proactively anticipate the user’s needs and serve up the most appropriate content, product or service.
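
A toy sketch of the collect-deduce-trigger pattern described above; all class, source and rule names are hypothetical and stand in for the far richer analytics a real context broker would apply.

```python
# Toy broker: collects observations, derives context via simple rules, triggers actions.
class ContextBroker:
    def __init__(self):
        self.observations = {}   # latest value per source
        self.rules = []          # list of (condition, action) pairs

    def publish(self, source, value):
        """Called by data sources (sensors, apps, connected devices)."""
        self.observations[source] = value
        self._evaluate()

    def subscribe(self, condition, action):
        """Register a rule: when condition(context) holds, run action(context)."""
        self.rules.append((condition, action))

    def _evaluate(self):
        for condition, action in self.rules:
            if condition(self.observations):
                action(self.observations)

broker = ContextBroker()
broker.subscribe(
    condition=lambda ctx: ctx.get("driving") and ctx.get("battery_percent", 100) < 20,
    action=lambda ctx: print("Low battery while driving -> suggest nearby charging points"),
)
broker.publish("driving", True)          # nothing triggered yet
broker.publish("battery_percent", 15)    # rule fires once both observations are in
```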

Applications
• Acquiring context-related information from multiple sources — including smartphones, sensors, the connected car and objects in your home and office
• Reasoning-based actions — analyzing context and acting based on the results, such as a map inside an electric vehicle prioritizing gas stations that have electric charging points
• Marketing — serving up relevant offers based around the customer’s current context
• Administrative functions — such as privacy and user preference management
• Business support — including transactions, metering and payments

Trajectory

Impact
• Increases the agility, relevance and precision of IT services
• Drives revenue generation and increases customer loyalty through IT-enabled business models that include subscriptions, advertising and more
• Enhances user experiences by ensuring personalized services are more relevant, particularly when combined with advanced analytics such as recommendation algorithms
• Enables the introduction of context-enriched services in mobility and the Internet of Things (IoT) where information push is favored over pull and end-device processing might be limited
• Increases the potential for economic growth in domains such as smart cities, smart manufacturing, smart agrifood, smart energy and domotics

Evolution
• Context brokers are expected to play a key role in the future IoT landscape, assisting enterprises looking to enter the mobility space, in particular providing the answer to complexities around sourcing and federating content.
• Their central position and access to diverse information creates significant potential for combining with leading-edge analytics algorithms.
• Investment in accelerator programs such as FIWARE is expected to boost potential for the creation of many of the key technologies needed to support a broad context broker uptake.

Issues
• Users may have different needs in a similar situation according to their state of mind and, as such, determining what is most relevant to them is extremely difficult.
• Acceptance and usage will depend on assurances around personal privacy and protection against information leakage.
• Distributed yet correlated events can be difficult to associate, such as those coming from multiple devices controlled by the same person for instance.
• The quality and complexity of the data analyzed may vary since it comes from multiple data sources and in diverse formats.
• Current solutions lack real-time aggregation and filtering.

Deep Learning 70 6 62 12 Early Adoption 2017 Transformational Concept

Definition
Deep Learning is a branch of machine learning with its roots in neural networks where multi-layered neural network algorithms attempt to model high-level abstractions in data. Currently most applications use supervised learning, where a network is trained with a large set of labeled data examples for each category (for example, an image of a cat is labelled ‘cat’) to produce a model that can then be used for mapping new samples. Unsupervised learning, on the other hand, is where machines identify objects, text or images without having been specifically trained on a related dataset. Some Deep Learning models build multidimensional spaces from texts. Here the network ‘discovers’ hidden semantic relations between words and places them according to their proximity. For example, in a word space created from historical books it would determine that ‘King’ – ‘Male’ + ‘Female’ = ‘Queen’. Other models follow patterns learned from examples to generate text or optimize a sequence of actions to achieve a goal.
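
The ‘King’ – ‘Male’ + ‘Female’ = ‘Queen’ example can be reproduced in miniature. The sketch below uses tiny hand-made (hypothetical) embeddings rather than vectors learned by a real model, but the vector arithmetic and nearest-neighbour lookup are the same idea.

```python
# Toy word-vector analogy with hand-made (hypothetical) embeddings.
import numpy as np

# Dimensions roughly encode: royalty, maleness, femaleness.
embeddings = {
    "king":     np.array([1.0, 1.0, 0.0]),
    "queen":    np.array([1.0, 0.0, 1.0]),
    "male":     np.array([0.0, 1.0, 0.0]),
    "female":   np.array([0.0, 0.0, 1.0]),
    "princess": np.array([0.6, 0.0, 1.0]),
    "woman":    np.array([0.1, 0.0, 1.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(vector, exclude):
    """Vocabulary word whose embedding is closest to `vector` by cosine similarity."""
    candidates = {w: v for w, v in embeddings.items() if w not in exclude}
    return max(candidates, key=lambda w: cosine(vector, candidates[w]))

analogy = embeddings["king"] - embeddings["male"] + embeddings["female"]
print(nearest(analogy, exclude={"king", "male", "female"}))   # -> "queen"
```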

Applications
• Image and speech recognition — such as voice commanded systems, self-driving cars, face recognition, video surveillance, image tagging and visual diagnosis
• Text recognition — including automatic translation, sentiment analysis, keyword extraction, summarizing text, automatic question-answering machines and information retrieval
• Action planning — including generating marketing planning
• Gaming — such as chess or Go
• Robotics — helping robots learn about objects without human assistance, for instance
• Fraud and security — primarily detecting anomalous behaviors

Trajectory

Impact
• Has enabled significant progress in challenges such as large-scale image processing and automatic speech recognition
• Outperforms all previous techniques for image and speech recognition, allowing thousands of categories to be recognized quickly with unprecedented accuracy
• Provides a very efficient approach for Natural Language Processing and text mining, such that it is likely to replace other rules-based techniques
• Offers a breakthrough approach to understanding and planning

Evolution
• A breakthrough in neural network algorithms in the mid-2000s made building and training very large and multi-layered networks in a massively parallelized form possible.
• Neural networks were then combined with other machine learning and artificial intelligence techniques to perform complex tasks.
• Deep learning models were pre-trained to reduce initial investment in terms of sample and setup times for initial models for new applications.
• Computer architectures were then optimized based on single-precision graphics processing units and field-programmable gate arrays.
• A Google Deep Learning program beat a human champion at the game of Go.

Issues
• Demands very specialized and scarce resources.
• Models are black boxes that provide answers but don’t give hints about the reasoning behind the result.
• Different problems need different algorithms, so it is not always easy to reuse them.
• Over-reliance may give a false sense of security and create problems when results are incorrect.
• Results may lead to undesirable social outcomes: encouraging discrimination against specific populations, for instance.

Digital Workplace 75 42 84 29 Early Adoption 2018 Medium Concept

Definition
The digital workplace is a combination of asynchronous messaging, real-time voice and video communications, screen shares, content and context. Those elements of content and context are provided through documents, pictures, URLs, sound and video.
At its very simplest, the digital workplace can be seen as the merge of three areas into a single workplace experience:
• Unified communication (UC) around voice and video
• Modern team collaboration around documents and projects
• Events coming from existing business or office applications such as scheduled meetings, incoming orders or new sales leads
It may also include enterprise social networking, shared drives, conferencing, note taking, presentations, task management and content management. There are, in fact, many different capabilities and functionalities available in such collaboration and communication environments, and new relevant technologies and solutions appear every day.

Applications
Fields of application are broad and include the following examples:
• Connecting internal employees to solve a specific problem, managing specific internal projects, documents and workflows and converging knowledge across functional areas
• Building communities for partners, customers or on-demand workers and simplifying collaboration with peers, customers and partners
• Improving customer experience within customer support, marketing engagement and partner enablement and driving insights from customer-facing teams
• Handling time-critical team working tasks effectively — including emergency calls within next generation 112 and 911 solutions
• Helping to save lives and rebuild communities in severe disaster scenarios
• Making Homeland Security more effective
• Providing intelligent assistance
• Enhancing virtual reality applications
• Enabling the practical application of sensing devices

Trajectory

Impact
• Provides a single tool for collaboration and communications, bringing conversations and content together in one place
• Improves productivity and enables contextual communications
• Significantly reduces the burden of email by moving email out of workflows
• Drives deeper customer insight, seamless interaction and innovation across the entire customer experience ecosystem

Evolution
• Traditional tools focused purely on internal or external collaboration, while unified communications (UC) tools did not address team collaboration around documents, and vice versa.
• Established messaging and UC applications evolved while new native digital workplace products began to emerge.
• UC solutions and team collaboration solutions converged, driven by increasing frustration with email.
• Some UC vendors are already offering digital workplace solutions.
• Digital workplace solutions could benefit from other emerging technological areas such as:
— Exascale computing — sharing real-time simulations, for instance
— Big Data analytics — analyzing existing data and sharing dashboards among the team, for instance
— Deep Learning and artificial intelligence — finding related data, images and text similar to features or problems defined by the team, for instance
• The architecture of the digital workplace is still evolving, moving from centralized systems with light clients toward more intelligence at the workplace itself.
• The development of cloud and mobility has led to the new architecture evolving once more.

Issues
• Change management is vital in new collaboration initiatives, to ensure the adoption needed to deliver return on investment.
• Knowledge workers have a tendency to resist change for change’s sake, but will happily embrace change that makes their jobs easier and their results better.
• External collaboration and communication, along with the massive number of collaborator tasks, may require identities and shared data to be validated.
• Standards need to be defined to address issues such as privacy and ethics, particularly for emergency applications

Digital Signage 92 21 83 71 Mainstream 2016 Low Concept

Definition
Digital signage uses electronic technologies to display information or deliver content. Content can be adapted to context, such as whether there are people watching or just passing, and the signage can interact with users, through a touch interface or a motion detection system, for example.

Applications
• Advertising — including offering promotions and encouraging return visits
• Entertainment — at theme parks, museums, cinemas and more
• Displaying information — such as news, weather, directions, traffic, pricing, menus, programs and even emergency information
• Improving visibility — including offering the shopper all variants of an article in the virtual store
• Enhancing the retail experience — allowing the user to try new garments virtually (when combined with augmented reality)
• Providing information — in kiosks and vending-machines

Trajectory

Impact
• Offers a more flexible alternative to signs and posters
• Increases the effectiveness of signage
• Improves the user experience through the interactivity it provides
• Makes an entertainment experience more exciting
• Can be updated easily and instantly, and managed remotely and centrally
• Allows content to be dynamically adapted to the context and the audience
• Enables richer, immersive user experiences through advanced multitouch, 3D rendering engines and responsive designs

Evolution
• Early digital signage used video projection or LED walls to deliver digital content but was costly and there were issues with visual quality.
• Digital signage moved from simple information displays to interactive and immersive devices.
• Large flat, high-definition screens (plasma, LCD) have boosted use of digital signage.
• Kiosks and digital signage are converging and quickly becoming an integral part of the omni-channel digital experience, particularly within retail.
• Convergence with mobile applications and augmented reality will provide an even more interactive and personalized user experience as rich and interactive content becomes increasingly important.
• Analytic capabilities are being incorporated. These measure the clicks, navigation and interactions that help retailers maximize impact and improve the user journey.
• Digital signage is moving into the cloud and the Internet of Things (IoT). It will soon be able to interact with beacons, sensors and wearables.
• Artificial intelligence and Deep Learning are being explored to allow digital signage to help and interact with users better.
• Transparent screens that include LCD displays are emerging. When the screen is turned off, the glass looks like a window; when it is turned on, the image appears.
• Glasses-free 3D and holographic images are emerging, but with limited traction so far.
• Screens that are bendable and can, to a certain degree, be adapted to different shapes are a subject of intense research. These flexible screens rely largely on existing OLED (organic light-emitting diode) or AMOLED (active-matrix organic light-emitting diode) technologies.

Issues
• The domain is still young. What effective digital content looks like and what the key success factors are is not yet fully understood.
• Equipment is more expensive than conventional displays, although cost has to be balanced against benefits.
• Delivering a multi-channel experience increases implementation and integration risk and complexity.
• Rich content and rich immersive and responsive user experiences add additional complexity and cost.
• The growing variety of devices and their diverse form-factors and operating systems is also increasing complexity.

Distributed Social Networks 78 58 77 78 Emerging 2018 Low Concept

Definition
Distributed social networks (DSNs) refer to social networking platforms developed by social network initiatives and operated in a federated and distributed mode. Many such projects are federated under the ‘federated social web’ banner.
Critical social network functions (such as personal information sharing, messaging, relationship management and content sharing) are enabled by emerging open standards and protocols. Although DSNs may have similar use cases to centralized social networks, they have a stronger emphasis on privacy and control of personal data. Many try to ensure (personal) data is owned by the end-user rather than controlled by the system/server administrator. Their peer-to-peer operating model and focus on privacy mean they tend to be used in specific scenarios.

Applications
• Privacy-oriented communities of interest
• Ephemeral or temporal communities
• Inter-organizational communities

Trajectory

Impact
• Eliminate the risk of being locked into an integrated solution
• Provide better control over the data exposed through social networks, enhancing data security and privacy
• Make creating business revenue difficult since personal data is owned by the end-user

Evolution
• Domination of integrated social networks such as Facebook led to worries about the control of personal data and the monetary interest of social networking companies.
• Startups, such as Diaspora, proposed distributed models, standards and protocols to guarantee data ownership to users, evolving established standards in the digital identity, instant messaging, telecommunications and web worlds.
• However, these initiatives struggled to grow significantly to compete with the established giants.
• Diaspora, for instance, still exists as a not-for-profit, open source project.

Issues
• DSNs are still heavily influenced by their background and lacking in maturity and standards.
• Knowledge sharing and the sense of discovery, the most important elements of social networking, have been exchanged for privacy.
• The censorial workload users encounter doesn’t facilitate inclusion.
• Where large communities do exist, migration from existing networks is likely to be slow.
• There is additional competition coming from emergent communication technologies, such as messaging platforms.
• Legislation around consumer privacy might make existing social networks adjust their attitude, eliminating the need for DSNs.
• Finding a healthy DSN business model is not easy since personal data cannot easily be harvested and used to make money.

Edge Computing 75 26 69 41 Early Adoption 2017 Medium Concept

Definition
The growth of the Internet of Things (IoT) and the emergence of ever-richer cloud services together call for data to be processed at the edge of the network. Edge computing is also referred to as fog computing, mesh computing, dew computing and remote cloud. It moves applications, data and services away from the centralized model of cloud computing to a more decentralized model that lies at the extremes of the network. Ubiquitous (and sometimes autonomous) devices — including the laptops, smartphones, tablets and sensors that may not be continuously connected to the network — communicate and cooperate among themselves and with the network to perform storage and processing tasks without the intervention of third parties. Edge computing covers a wide range of technologies: from wireless sensor networks and mobile data acquisition to distributed peer-to-peer ad-hoc networking and processing, and more.
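
To make the pattern concrete, here is a minimal, illustrative sketch (in Python) of edge-side processing: a node buffers raw sensor readings locally, computes a compact summary and forwards only that summary upstream. All names (EdgeNode, forward_to_cloud) are hypothetical and stand in for whatever uplink a real deployment would use.

```python
# Minimal sketch of edge-side aggregation: raw readings stay local,
# only compact summaries travel to the central cloud.
from statistics import mean
from typing import Dict, List


def forward_to_cloud(summary: Dict) -> None:
    # Placeholder for a real uplink (HTTPS, MQTT, ...): here we just print.
    print("uplink ->", summary)


class EdgeNode:
    def __init__(self, node_id: str, batch_size: int = 10) -> None:
        self.node_id = node_id
        self.batch_size = batch_size
        self.buffer: List[float] = []

    def ingest(self, reading: float) -> None:
        """Store a raw reading locally; flush a summary once the batch is full."""
        self.buffer.append(reading)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        summary = {
            "node": self.node_id,
            "count": len(self.buffer),
            "min": min(self.buffer),
            "max": max(self.buffer),
            "mean": round(mean(self.buffer), 2),
        }
        forward_to_cloud(summary)   # one small message instead of many raw ones
        self.buffer.clear()


if __name__ == "__main__":
    node = EdgeNode("temperature-sensor-42", batch_size=5)
    for t in [21.0, 21.2, 21.1, 25.7, 21.3, 21.2]:
        node.ingest(t)
```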

Applications
• Connected vehicles — with their wide variety of interactions and connections (including car-to-car, car-to-infrastructure and wireless and mobile networks)
• Industry 4.0 — ensuring smart factory initiatives are scalable
• Smart cities — allowing cities to scale data-driven citizen services
• Smart home — releasing the burden on internet bandwidth as a growing number of smart devices share more and more information
• Online shopping — manipulating the frequently changing shopping cart closer to the consumer
• Mobile commerce — such as mobile business models for finance, advertising and retail, bringing affordable scale to potentially computationally-intensive analytics
• Mobile healthcare — including health monitoring services and patients’ records management systems
• Resource-intensive end-user applications — including augmented reality, mobile gaming, media streaming and home multimedia sharing

Trajectory

Impact
• Brings computation and storage closer to the source of the data, ensuring the results of analytics and other processing are rapidly available and highly accessible to the systems that need them most
• Addresses latency issues detected in large Internet of Things (IoT) scenarios
• Conserves bandwidth and reduces privacy and security risks by eliminating unnecessary network transmission as an increasing number of ‘things’ and connected devices generate growing volumes of data
• Lightens the load of centralized cloud servers
• Expected to enable a broad spectrum of use cases and applications for which traditional cloud computing is not sufficient

Evolution
• Networks were combined with typical cloud principles to create decentralized and dispersed cloud platforms.
• Growing volumes of IoT-created data are increasingly being stored, processed, analyzed and acted upon close to, or at, the edge of the network.
• Edge currently relies on specific vendor solutions.

Issues
• Applications written for an edge scenario will often need to work on heterogeneous environments.
• Data reported from different things may come in a variety of formats. Standardization is needed to enable interoperability among devices and sensors within both edge and traditional cloud environments.
• The potentially thousands, or even millions, of small devices and sensors in edge computing set-ups will require a new style of device management. This may need to be decentralized and able to scale to degrees unprecedented in today’s cloud architectures.
• Envisaged as multi-tenant, edge computing set-ups will require specific isolation mechanisms to avoid security and privacy concerns.

Exascale 50 47 91 9 Adolescent 2019+ Medium Concept

Definition
Exascale supercomputers are high performance computing (HPC) systems capable of at least one billion billion calculations per second (one exaFLOPS) — a thousand-fold increase over today’s petascale supercomputers. Exascale represents a major step forward in addressing the new challenges of the 21st century at a time when all sectors (but particularly industry, academia and science) are demanding increasingly powerful computing systems that may leverage cognitive computing to resolve problems involving ever-growing volumes of data. Exascale is believed to be the order of processing power of a neural network as big as the human brain and, as such, is the target power of the Human Brain Project.
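
As a rough sense of scale, the back-of-the-envelope calculation below (a sketch with an arbitrary, hypothetical workload size) shows why the thousand-fold jump matters: a job that keeps an exascale machine busy for an hour would occupy a petascale machine for roughly six weeks.

```python
# Back-of-the-envelope comparison of petascale vs exascale throughput.
EXA_FLOPS = 10 ** 18          # one exaFLOPS: a billion billion operations per second
PETA_FLOPS = 10 ** 15         # one petaFLOPS

workload = 3.6 * 10 ** 21     # hypothetical simulation needing 3.6e21 floating-point operations

hours_exa = workload / EXA_FLOPS / 3600
hours_peta = workload / PETA_FLOPS / 3600

print(f"Exascale machine : {hours_exa:.0f} hour(s)")    # ~1 hour
print(f"Petascale machine: {hours_peta:.0f} hours")     # ~1000 hours (about 6 weeks)
```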

Applications
• Smart cities with high-quality urban services — managing city-level traffic, monitoring the movement of citizens around the city, providing data-driven real-estate valuations, monitoring disease spread and more
• Climatology — delivering finer-grained and more reliable predictions and understanding the exact location and time of severe weather phenomena
• Environment-friendly engines — simulating combustion chamber performance with more precision during the design phase to reduce CO2 emissions, fuel consumption and noise levels
• Genomics — including enabling predictive diagnosis, more efficient treatments and customized dosing
• Oil and gas exploration — using simulations to predict whether oil wells will fulfill expectations more accurately before drilling begins, for instance
• Agriculture — reinventing agriculture to meet 21st century demand by developing precision agriculture and reducing pesticide use while taking into account climate change, ground quality alteration and plant behavior
• Astrophysics — such as gaining a better understanding of our solar system and universe and calculations for future space missions

Trajectory

Impact
• Helps resolve challenges brought about by demands for intensive computing and analysis of massive data streams
• Enables the resolution of complex problems that are currently impossible to solve within a reasonable time
• Increases productivity and competitiveness by easing time-constrained processes and speeding up decision-making
• Allows products to be designed faster and more efficiently
• Helps with anticipating multi-dimensional phenomena such as climate change by allowing them to be simulated to a finer level of detail
• Will also advance consumer electronics and business information technologies through its innovations in power efficiency and reliability

Evolution
• Exascale is vital for helping to resolve challenges in research, applied science, industry and society.
• Large countries have set ambitious goals to become leaders in exascale, building large dedicated research programs in association with high performance computing (HPC) vendors.
• In Europe, the European Commission is supporting several exascale projects under its Horizon 2020 program. These are aligned with the Strategic Research Agenda proposed by the European Technology Platform for High Performance Computing (ETP4HPC) and pioneered by Bull, the technology brand from Atos. Bull announced the first open exascale-class supercomputer range (Bull sequana x1000) at the end of 2015.
• In the US, exascale research is supported by a National Strategic Computing Initiative (NSCI).
• In Japan, the RIKEN Advanced Institute for Computational Science is planning an exascale system for 2020.
• China also has strong ambitions in exascale.

Issues
• Exascale is dependent upon a new generation of supercomputers for tackling its four main challenges:
— Power consumption needs to be reduced.
— Compute will need to be able to cope with massive data pools, rising from petabytes to exabytes.
— Application performance, with tens-of-millions-way parallelism, needs to be accelerated.
— Systems will experience various kinds of faults several times a day because of their massive number of critical hardware and software components. Self-healing systems (resiliency) are required.
• A new generation of massively parallel software is also needed to extract more parallelism, handle increasingly hybrid configurations and support greater heterogeneity.

Fabric-Based Computing 64 57 62 82 Adolescent 2018 Medium Concept

Definition
As the server landscape becomes more software-defined, the fabric-based computing model comprises a mesh of interconnected nodes that work together to form a fabric that appears unified when viewed from a distance. The nodes consist of loosely-coupled virtualized storage, networking, processing and memory functions alongside peripherals, each of which can scale independently of all other nodes. These resources can be repurposed easily or even automatically. Datacenter infrastructure management (DCIM) layers, and potentially applications, dynamically negotiate their resource requirements with the datacenter services provisioning layer. In contrast to grid computing, fabric solutions are not aimed at a specific scenario.
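
The sketch below is a simplified illustration, not a real DCIM interface: independent pools of fabric resources and a provisioning layer that reserves, or rolls back, the resources a service requests. All class and service names are invented for the example.

```python
# Illustrative sketch (not a real DCIM API): independent pools of fabric
# resources and a provisioning layer that satisfies a service's request.
from dataclasses import dataclass, field


@dataclass
class ResourcePool:
    kind: str            # "cpu", "memory_gb", "storage_tb", ...
    capacity: float
    allocated: float = 0.0

    def reserve(self, amount: float) -> bool:
        if self.allocated + amount > self.capacity:
            return False
        self.allocated += amount
        return True


@dataclass
class FabricProvisioner:
    pools: dict = field(default_factory=dict)

    def add_pool(self, pool: ResourcePool) -> None:
        self.pools[pool.kind] = pool

    def provision(self, service: str, request: dict) -> bool:
        """Reserve every requested resource type, or nothing at all."""
        reserved = []
        for kind, amount in request.items():
            pool = self.pools.get(kind)
            if pool is None or not pool.reserve(amount):
                # Roll back partial reservations so pools stay consistent.
                for done_kind, done_amount in reserved:
                    self.pools[done_kind].allocated -= done_amount
                return False
            reserved.append((kind, amount))
        print(f"provisioned {service}: {request}")
        return True


if __name__ == "__main__":
    fabric = FabricProvisioner()
    fabric.add_pool(ResourcePool("cpu", capacity=128))
    fabric.add_pool(ResourcePool("memory_gb", capacity=1024))
    fabric.provision("analytics-service", {"cpu": 32, "memory_gb": 256})
```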

Applications
• Enabling the fully software-defined data center with its improved agility and reduced time-to-execution
• Addressing the sometimes near-real-time demands of Big Data and contextual smart mobility

Trajectory

Impact
• Provides a fully-meshed physical network upon which the software-defined data center can define its services
• Provides the high levels of granularity needed to adjust resources to perfectly match requirements
• Aims to address problems that inhibit full-scale virtualization of the entire data center
• Reduces the number of physical changes required and decouples them from service provisioning
• Allows deployments and configuration changes to be executed in near real time
• Means IT resources can be rapidly aligned to changing business demands
• Optimizes performance and power consumption
• Expected to increase overall resource utilization to over 80 percent on average
• Reduces cost through better resource utilization and automation

Evolution
• Fabric-based computing is initially being developed alongside specific architectures to address performance or point issues.
• The Open Compute Project, which aims to generate cost-effective generic hardware for large users, is leveraging fabric-based approaches.

Issues
• Fabric-based computing requires radical architectural changes to the data center — including higher switching speeds on the physical level and networks defined by software.
• Operating systems and hypervisors need to be able to fully leverage the scalability and the granularity of fabric-based data centers.
• Application development frameworks will need to be fabric-aware.
• Current solutions are largely proprietary, with limited hardware interoperability and customized management interfaces.
• Standards are required to prevent fabrics from becoming the new mainframes, with vertical silos and a significant cost of change.

Immersive Experience 63 35 72 33 Early Adoption 2018 High Concept

Definition
An immersive experience is one that is totally absorbing, that allows users to disconnect from the real world and lose themselves in a simulated dimension. Immersive experience technologies encompass a wide range of devices that assist in making the experience more absorbing and, by doing so, make the technologies more invisible to the user. These include:
• 3D displays — display devices that create the perception of depth
• Haptic devices — which add the sensation of touch
• Holographic user interfaces — laser-based volumetric displays where users interact with holographic images
• Virtual reality (VR) — digital simulations of real world environments

Applications
• Entertainment — including movies, TV and gaming
• Health — used by surgeons and radiologists and to ease remote operations
• Maintenance — assisting engineers in their diagnosis
• Marketing — for demonstrations
• GUI — 3D applications and websites
• Natural interfaces — for user input and feedback
• Visualizing data
• Conferencing

Trajectory

Impact
• Enables a more natural interaction with computing devices
• Transforms computing devices into information appliances, which give the user a more positive feeling

Evolution
• The first haptic input devices were created a while ago, but with no true purpose.
• The first generation of consumer virtual reality products is now available, but it will take a little time for these to be integrated into work activities.
• As information appliances, solutions will become more invisible over time.
• Next-generation haptic feedback systems will make VR more realistic (and thus immersive).
• The display (VR) and input (haptics) will become increasingly attached to the human body.
• Eventually these may be superseded by cybernetic implants.

Issues
• Immersion levels will be limited by the quality of the information available.
• New conceptual thinking is needed to identify fitting immersive experiences.
• High levels of computing power are required to drive high-definition immersive displays.

In-Memory Computing 83 14 53 57 Mainstream 2016 High Concept

Definition
In-memory computing is a computing style where the central memory of the computer or networked computers running applications acts as the primary data store for the — potentially multi-terabyte — data sets used by those applications. These applications then use traditional hard-disk drives to persistently store in-memory data to enable recovery, manage overflow situations and transport data to other locations.
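
A minimal sketch of the principle, assuming a toy key-value workload: reads and writes are served from RAM, while every write is also appended to a disk log that exists purely so the in-memory state can be rebuilt after a restart. This is illustrative only and not modeled on any particular product.

```python
# Minimal sketch: an in-memory key-value store that keeps the working set
# in RAM and appends every write to a disk log purely for recovery.
import json
import os


class InMemoryStore:
    def __init__(self, log_path: str = "store.log") -> None:
        self.log_path = log_path
        self.data: dict = {}
        self._recover()

    def _recover(self) -> None:
        """Rebuild the in-memory state by replaying the persisted log."""
        if not os.path.exists(self.log_path):
            return
        with open(self.log_path, encoding="utf-8") as log:
            for line in log:
                record = json.loads(line)
                self.data[record["key"]] = record["value"]

    def put(self, key: str, value) -> None:
        self.data[key] = value                       # served from RAM
        with open(self.log_path, "a", encoding="utf-8") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")

    def get(self, key: str):
        return self.data.get(key)                    # no disk access on reads


if __name__ == "__main__":
    store = InMemoryStore()
    store.put("order:1001", {"status": "shipped"})
    print(store.get("order:1001"))
```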

Applications
• Providing near real-time access to information (not only data) — in seconds instead of hours or days — in domains such as connected living, e-commerce, CRM and ERP
• Providing intensive analytics — in domains such as Business Intelligence, Big Data and insight platforms for CRM, Industry 4.0, smart cities, supply chain planning and security intelligence
• Providing complex event processing — in domains such as High Frequency Trading, predictive monitoring, intelligent metering and fraud and risk management
• Edge computing — leverages in-memory computing platforms to enable local IoT data to be analyzed in real time, which in turn allows decisions to be made in the moment.

Trajectory

Impact
• Drives business transformation and opens up new opportunities
• Provides faster processing for ever-increasing data volumes
• Boosts performance by enabling low-latency application messaging, mixing transactions and analytics on the same data set, shortening batch processing and delivering real-time contextual event correlation and processing
• Frees organizations from non-productive activities such as data consolidation and reconciliation
• Enhances agility, enabling companies to operate in real time
• Simplifies next generation digital processes — including unified data analysis
• Accelerates processes by up to 1000x, with deeper analytics for immediate insight to action
• Boosts strategic decision-making, making it easier to leverage predictive tools that can detect trends and simulate outcomes

Evolution
• SAP has taken the strategic decision to move all its applications to its SAP HANA in-memory platform.
• Others are progressively following suit for strategic, time-sensitive applications.
• Open source solutions are emerging in this space, Alluxio for instance.

Issues
• In-memory computing enabled applications are hardware intensive, requiring new generation ultra-high memory capacity servers to handle massive data volumes with perfect quality of service and security.
• Leading server vendors are innovating to support this move, leveraging technologies often derived from high performance computing.
• Among them, bullion servers from Atos are recognized today as the most powerful x86 servers in the world in terms of speed and memory.

Insight Platforms 63 10 38 21 Early Adoption 2017 Transformational Concept

Definition
Insight platforms are the third generation of business analytics platforms, following the first generation (Business Intelligence), which focused on performance-tracking, and the second generation (Big Data analytics), which focused on behavioral analysis. This combination of new and existing technologies collects and analyzes massive data sets from connected environments in real time, rapidly transforming that data into actionable (prescriptive) insights; a minimal streaming sketch follows the list below. Examples include:
• Streaming analytics — analyzing data in motion in real time to accelerate time-to-insight
• Distributed analytics — analyzing data in situ within a distributed architecture
• Prescriptive analytics — which makes predictions based on its Big Data analysis and then suggests decision options.
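
As a minimal illustration of the streaming idea referenced above, the sketch below keeps a small sliding window over an incoming feed and raises an alert the moment a value breaks away from the recent baseline. Window size and threshold are arbitrary example values.

```python
# Minimal streaming-analytics sketch: a sliding window over incoming events
# turns raw readings into an actionable signal as data arrives.
from collections import deque
from statistics import mean


def stream_alerts(readings, window_size=5, threshold=1.25):
    """Yield an alert whenever a new reading exceeds the recent average by `threshold`x."""
    window = deque(maxlen=window_size)
    for value in readings:
        if len(window) == window.maxlen and value > threshold * mean(window):
            yield f"alert: {value} is well above recent average {mean(window):.1f}"
        window.append(value)


if __name__ == "__main__":
    sensor_feed = [10, 11, 10, 12, 11, 30, 11, 10]   # 30 is the anomaly
    for alert in stream_alerts(sensor_feed):
        print(alert)
```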

Applications
• Industry 4.0, smart cities, smart utilities, connected healthcare and smart grid — collecting and analyzing data from billions of sensors to proactively improve user experience, optimize asset value and life, create new services and reduce risks
• Secure operations — collecting and analyzing data from all IT and security components to detect threats and attacks and execute defensive measures on the fly
• Defense and homeland security — collecting and analyzing data from diverse sources to make and execute the optimal tactical and strategic decisions
• Retail and marketing — collecting and analyzing data about customer needs, desires and behaviors to proactively improve user experience and create new services

Trajectory

Impact
• Enables intelligent automation that is able to sense, predict and adapt to people, business and things in real time
• Allows analytics to be linked to business outcomes
• Are expected to be the cornerstone of the Internet of Things (IoT) and at the heart of future digital strategies, as organizations’ digital nervous systems

Evolution
• At an early stage of their development, insight platforms are considered a critical technology for the future.
• For prescriptive analytics in particular they will increasingly leverage the learning capabilities of cognitive computing.
• These platforms are expected to be embedded within large-scale solutions, such as IoT and Industry 4.0, while also available as customizable platforms.
• Both independent software vendors and Open Source communities will provide components, while integrated solutions will come from integrators and SaaS providers.

Issues
• A broad set of capabilities are needed for building these platforms:
— Advanced ability to collect, aggregate and clean data from billions of sensors in real time
— Extreme computing power to analyze and create meaning from Exabytes of information on the fly or for rapid simulation
— Advanced algorithms for automated or human decision-making
— Real-time orchestration of prescriptive insight to action, plus strong embedded security
• Large enterprise platforms will often need to be customized for corporate and government specific processes.

Internet of Everything 44 8 45 12 Adolescent 2018 Transformational Concept

Definition
The Internet of Everything (IoE) is a ubiquitous communication network that effectively captures, manages and leverages data from billions of real-life objects and physical activities. It extends the Internet of Things (IoT) by also including people, processes, locations and more.
Networks of spatially distributed sensors and actuators (nodes), each with a transceiver and a controller for communicating within a networked environment, detect and monitor events (sensors) or trigger actions (actuators). Each has a unique identifier and the ability to transfer data over a network without human-to-human or human-to-computer interaction. Sensors and actuators vary in size and price, with some available on a microscopic scale. As such, they can be embedded into many different objects and deployed in many different environments — including adverse environments in remote locations. Sensors may include capabilities such as GPS, RFID, Wi-Fi or internet access. Some are even capable of detecting the approximate location of other nodes. Examples include heart monitoring implants, biochip transponders on farm animals, electric clams in coastal waters, automobiles with built-in sensors or field operation devices that assist fire-fighters in search and rescue.
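
A small, illustrative sketch of such a node: it carries a globally unique identifier and emits self-describing readings that can be transferred over any network without human involvement. The field names are invented for the example.

```python
# Illustrative sketch of an IoE node: a uniquely identified sensor that
# emits self-describing readings suitable for machine-to-machine transfer.
import json
import random
import time
import uuid
from dataclasses import dataclass


@dataclass
class SensorNode:
    kind: str
    node_id: str = ""

    def __post_init__(self):
        # Every node carries a globally unique identifier.
        self.node_id = self.node_id or str(uuid.uuid4())

    def read(self) -> str:
        """Produce one reading as a JSON document ready to be sent over any network."""
        payload = {
            "node_id": self.node_id,
            "kind": self.kind,
            "value": round(random.uniform(18.0, 24.0), 2),
            "timestamp": time.time(),
        }
        return json.dumps(payload)


if __name__ == "__main__":
    node = SensorNode(kind="temperature")
    print(node.read())
```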

Applications
• Automating operations
• Monitoring asset health, wear and location, traffic or the environment
• Enhancing healthcare
• Providing surveillance and security, including early warning systems
• Acquiring information about movements and/or terrain parameters
• Managing energy or water usage

Trajectory

Impact
• Provides large amounts of data for analysis, enabling businesses to make better-informed decisions.
• Ensures data availability by automatically rerouting data in the event of a node failure to optimize availability.
• Enables numerous value-adding services across healthcare, retail, city management and more.

Evolution
• The concept of the IoT first became popular in the late 1990s as a follow-up of ubiquitous computing.
• The development and cost reduction of wireless and mobile networks and low-power micro-controllers, which progressively made it possible to connect anything to the internet, brought the IoT to life.
• While it was initially most closely associated with machine-to-machine communication in the manufacturing, energy and utility industries, it now covers a much broader, nearly universal scope.
• Projections predict 100 trillion connected devices in the world by 2030. As such, broader adoption of IPv6 is required to overcome the scarcity of IPv4 addresses.

Issues
• Security, privacy and trust must be considered in developing IoT/IoE solutions.
• Companies must decide what data to store and how to store it, ensuring they have sufficient storage for both personal data (consumer-driven) and Big Data (enterprise-driven).
• IoE will significantly increase demand on datacenter resources and may even mean data processing, network connectivity and network bandwidth need to be re-architected.

IPv6 91 39 84 58 Early Adoption 2017 Low Concept

Definition
IP (Internet Protocol) is the main communication protocol underlying networks such as the Internet. IPv6 (version 6) was designed in the 1990s as the successor to IPv4. IPv6 includes additional features, such as address assignment, network renumbering and auto-configuration. It addresses some of the shortcomings of IPv4 by providing support for multicast and managing mobile data traffic more efficiently. Additionally, IPv6 embeds several configuration and discovery mechanisms that were added to IPv4 only after its standardization.
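
The snippet below, using Python's standard ipaddress module, gives a feel for the scale of the 128-bit address space and for IPv6 notation (2001:db8::/32 is the prefix reserved for documentation examples).

```python
# Quick look at the IPv6 address space using the standard library.
import ipaddress

print(2 ** 128)                             # ~3.4e38 possible addresses (vs ~4.3e9 for IPv4)

addr = ipaddress.ip_address("2001:db8::1")  # documentation-prefix example address
print(addr.version)                         # 6
print(addr.exploded)                        # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.compressed)                      # 2001:db8::1
```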

Applications
• Real-time networking — as required by autonomous vehicles, remote tele-surgery, industry automation, video-streaming and voice over IP, overcoming the artificial bottleneck imposed by IPv4 on many of these use cases
• Mobility — providing unlimited scaling at the routing level, with IPv6 considered critical for 5G
• Internet of Everything — providing nearly unlimited addressing
• A platform for innovation — since it performs better, is simpler and easier to deploy than IPv4

Trajectory

Impact
• Allows more connected objects to connect to one another outside the span of a local area network
• Is vital for the sustainability of the internet, mobile internet and IoT-related business:
— A 128-bit address space means the number of possible IP addresses increases from a few billion with IPv4 to 340 trillion trillion trillion addresses (36 trailing zeros).
— Addressing is basically free, which is particularly important for IoT deployments where the number of objects to address may be very large.
• Includes hierarchical addressing, IPsec authentication and security, and improved ability to carry multimedia data.
• Simplifies administration and eliminates the need for workarounds such as Network Address Translation (NAT).

Evolution
• IPv4 was deployed in 1981 and IPv6 in 1999.
• The IPv4 address space became completely allocated in February 2011.
• Tunneling mechanisms and gateways were standardized by the Internet Engineering Task Force (IETF) to bypass IPv4 limitations and deal with the IP address shortage.
• Migration to IPv6 has started in many different areas, mainly on central infrastructures, though the uptake of pure IPv6 is taking longer than expected.
• The fraction of IP traffic using IPv6 is still significantly lower than IPv4: around 26 percent of US applications are IPv6 native and 10 percent of Facebook’s global traffic uses IPv6, for instance.
• Some of the networking community are advocating a global switch.
• The number of applications is growing, increasing the need for the IPv6 protocol.

Issues
• The transition, which was announced in 1995 and supposed to last until year 2000, has been much slower than expected.
• Although support for IPv6 in major operating systems facilitates this transition, many legacy devices cannot be updated to support IPv6 and will not be replaced overnight.
• Technologies such as Carrier-Grade NAT and recent regulatory moves are helping IPv4 survive, thus harming the deployment of IPv6.

Location-Based Services NG 85 46 90 36 Early Adoption 2018 Low Concept

Definition
Geographical information systems (GIS) capture, store, analyze and display information referenced according to its geographical location. The next generation takes the third dimension into account. This 3D (spatial) representation provides a much more realistic representation of the world. Spatial data can be gathered from a wide array of sources, including global positioning satellites, beacons, Wi-Fi hotspots, remote sensors and visible light communication (VLC) sources such as Li-Fi. Visualization technologies allow companies to extract insights from this spatial data. The most basic use cases push content or activate or deactivate functionality based on geofences or address lists. More advanced spatial (3D) analytics uses a user’s or an object’s exact geographical position to deliver contextualized information and services.
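
A basic geofence check is one of the simplest location-based services to sketch: compute the great-circle distance between a reported position and a point of interest and trigger an action inside a given radius. The coordinates and radius below are arbitrary example values.

```python
# Minimal geofence sketch: trigger an action when a device's reported
# position falls within a radius around a point of interest.
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000


def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))


def inside_geofence(lat, lon, fence_lat, fence_lon, radius_m):
    return distance_m(lat, lon, fence_lat, fence_lon) <= radius_m


if __name__ == "__main__":
    store = (48.8584, 2.2945)        # hypothetical store location
    customer = (48.8590, 2.2950)     # reported device position
    if inside_geofence(*customer, *store, radius_m=100):
        print("customer entered the geofence: push the welcome offer")
```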

Applications
• Identifying and understanding assets and customers — anything from simple store locators to real-time displays of delivery driver locations or dense pipe networks
• Securing sensitive data — through geofences
• Enabling local experiences — such as triggering an offer when a customer enters a store
• Studying sound propagation, lighting (shades) or rain flow

Trajectory

Impact
• Improves customer intimacy as customers come to expect contextual products and services based on their current location
• Drives innovation beyond the industries that have traditionally used GIS
• Increases brand preference by delivering personalized customer experiences
• Increases revenue by enabling more effective marketing that influences purchasing decisions

Evolution
• Big Data technologies have made the storage and processing of large spatial data sets economically feasible and scalable.
• Cloud has made augmenting solutions with location-based context simpler.
• New tools are making spatial analytics more widely available.
• The popularization of smartphones, as well as glasses and other wearables, will strengthen this market.

Issues
• Location technologies in general, and beacons in particular, are immature.
• Traditional location technologies only provide a sufficient level of accuracy on the third dimension when they are combined.
• GPS is generally unusable within buildings. Integration with indoor positioning is required to ensure continuous positioning.
• Accurate indoor positioning requires specific equipment.
• Spatial analytics consume a vast amount of computing resources.
• When it comes to targeted marketing, insights such as past behaviors, preferences, needs and situations are needed to deliver relevant messages to individual customers; however, the more accurate the technology, the lower its reach.
• In addition, smart contextual messages can be difficult to deliver at scale.
• Proximity-based marketing and in-store location tracking are already raising privacy concerns.

LPWAN 81 29 79 41 Early Adoption 2017 Medium Concept

Definition
Also referred to as ultra-narrowband, Low-Power Wide-Area Network (LPWAN) wireless communication technology has a low power requirement and a long range, but a low data rate. LPWAN was designed to enable objects that don’t have a powerful source of energy to be connected, primarily to the Internet of Things (IoT). After all, most objects connected to the IoT only need to transfer small amounts of data, such as commands and statuses, and that operation only requires a small amount of power.
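
Because LPWAN uplinks carry only a few bytes (Sigfox, for instance, limits uplink messages to 12 bytes), payloads are packed very tightly. The sketch below shows one hypothetical 5-byte encoding of a metering reading; the field layout is invented for illustration.

```python
# LPWAN payloads are tiny by design. This sketch packs a metering reading
# into 5 bytes, which fits comfortably within typical uplink limits
# (Sigfox, for example, allows only 12 bytes per uplink message).
import struct


def encode_reading(device_counter: int, temperature_c: float, battery_pct: int) -> bytes:
    # uint16 counter, int16 temperature in 0.01 degree steps, uint8 battery level
    return struct.pack("<HhB", device_counter, int(temperature_c * 100), battery_pct)


def decode_reading(payload: bytes):
    counter, temp_centi, battery = struct.unpack("<HhB", payload)
    return counter, temp_centi / 100, battery


if __name__ == "__main__":
    frame = encode_reading(42, 21.37, 87)
    print(len(frame), "bytes:", frame.hex())
    print(decode_reading(frame))
```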

Applications
• Metering — gas and water in particular
• Monitoring — including waste, parking, land, livestock, forests and pipelines
• Tracking — items such as containers, bicycles, pets, indoor assets
• Fault signaling — such as with smart home appliances and alarms
• Control — street lighting and plant machinery for instance

Trajectory

Impact
• Allows millions of objects to be connected at low cost
• Is a simpler and more effective solution for the IoT than traditional cellular networks, which were designed for high performance data transfer

Evolution
• Early LPWAN adopters first appeared in 2014.
• Country and global-sized networks are already being deployed.
• Waste monitoring and telemetry systems, whose refresh rate is between one and four measurements per day, are already working with LPWAN solutions from LoRa or Sigfox.
• Remote surveillance operators have started using LPWAN to monitor alarm systems as a more reliable alternative to GSM.
• Although expanding quickly, geographical coverage is still limited.
• LPWAN is expected to become mainstream around 2017.

Issues
• There are currently multiple LPWAN standards and this is impacting interoperability.
• For the IoT to really take off, LPWAN modem cost needs to drop below $1, provisioning below $2 and data cost to zero.

Memristors 31 29 66 21 Emerging 2019+ High Concept

Definition
Memristors are non-linear, passive electric components that have a resistance that varies according to the history of the flow of electric charge through them. The component’s resistance reflects this history, hence its name combining “memory” and “resistor”. Memristor-based memory is non-volatile, with a density higher than 100 terabytes per cubic centimeter, high speeds and low power requirements.
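
The toy simulation below uses the linear ion-drift model often used to illustrate memristive behavior: driving a constant voltage through the device shifts its internal state, so its resistance at any moment reflects the charge that has already flowed. Parameter values are indicative only.

```python
# Toy simulation of the linear ion-drift memristor model: the device's
# resistance depends on the history of charge that has flowed through it.
R_ON, R_OFF = 100.0, 16_000.0   # ohms: resistance when fully doped vs undoped
D = 10e-9                       # device thickness (m)
MU = 1e-14                      # dopant mobility (m^2 V^-1 s^-1), indicative value

w = 0.1 * D                     # width of the doped region (the device's state)
dt = 0.1                        # time step (s)
voltage = 1.0                   # constant drive voltage (V)


def resistance(w):
    x = w / D
    return R_ON * x + R_OFF * (1 - x)


for step in range(10):
    i = voltage / resistance(w)        # current through the device (Ohm's law)
    w += MU * R_ON / D * i * dt        # state drifts in proportion to charge passed
    w = min(max(w, 0.0), D)            # keep the doped region inside the device
    print(f"t = {step * dt:.1f} s   R = {resistance(w):,.0f} ohm")
```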

Applications
• Providing advanced memory technology such as non-volatile random access memory
• Delivering fast, compact and low-power neuromimetic elements since memristors function using a continuous range while transistor-based logic only knows zeros and ones
• Implementing neuroplasticity by forming neuromemristive systems, a variant of neuromorphic computers

Trajectory

Impact
• Allow neural networks to move from computer simulations onto their own physical implementations
• Have a longer term potential in neuromorphic systems
• Could outperform traditional computers at a fraction of the energy cost as compact neuromemristive systems

Evolution
• First coined in 1971 as a name for a theoretical passive electrical component to supplement the resistor, capacitor and inductor.
• The first physical example was created in 2008, paving the way for concrete applications.
• Experimental organic neural networks based on polymeric memristors are being created.

Issues
• Memristors are still experimental.

Natural User Interfaces 68 14 58 26 Early Adoption 2017 High Concept

Definition
Natural user interfaces (NUIs) are systems designed to make human-computer interaction feel as natural as possible. This wide range of technologies allows the user to leverage everyday behaviors, intuitive actions and their natural abilities to control interactive applications. These might include touch, vision, voice, motion and higher cognitive functions such as expression, perception and recall. Some natural user interfaces rely on intermediary devices while other more advanced systems are either unobtrusive — or even invisible — to the user. The ultimate goal is to make the human-computer interface seem to disappear. Examples include:
• Augmented reality — adding an additional intelligence layer on top of natural life
• Virtual reality — providing an immersive digital representation of a real or imaginary environment
• Mixed reality — merging of real and virtual worlds to produce environments where physical and digital objects co-exist and interact in real time
• Neuro interfaces — enabling a direct communication pathway between the brain and an external device based on neural activity generated by the brain
• Virtual retinal displays — images broadcast directly onto the retina, effectively augmenting the real world
• Body monitoring — reading body language with gestures beyond the hand and finger
• Haptics — digital feedback mimicking physical sensations
• Adaptive interfaces/emotion tracking — changing layouts and elements in response to the changing context or needs of the user
• Relational awareness — devices that, as an agent for the user, are aware of where the user is in relation to other people
• Affordances — providing a digital representation of a physical object that allows the user to take advantage of all the things they already know about how to use that object
• 3D displays — display devices that create the perception of depth
• Holographic user interfaces — laser-based volumetric displays where users interact with holographic images
• Physical controls — where physical inputs are translated into digital outputs

Applications
• Voice control — using language to access a large set of commands
• Conversational agents — allowing users to interact without prior knowledge of a system’s commands
• Audio channels — opening up novel channels for data transmission
• Collaboration — allowing multiple users to control the interface simultaneously
• Utilizing three dimensions — providing a more direct connection to content by taking advantage of depth in movement
• Monitoring physical movements — helping systems (and users) understand more about themselves in order to learn and adapt
• Audio security — providing identification and authentication through voice signatures
• Visual security — providing identification and authentication through visual images
• Biometric security — providing identification and authentication through other biometrics

Trajectory

Impact
• Reduces complexity for users with user interface design incorporating more and more of the components needed to make user interfaces as natural as possible
• Decreases the level of user training needed, with training mainly focused on domain knowledge rather than interacting with the interface
• Allows computers and human beings to interact in diverse and robust ways, tailored to the abilities and needs of an individual user
• Enables complex interactions with digital objects in our physical world

Evolution
• The first attempts at natural user interfaces aimed to provide an alternative to command-line interfaces (CLIs) and graphical user interfaces (GUIs).
• Attention later turned to developing user-interface strategies using natural interactions with the real world.
• Touch and voice recognition are now increasingly appearing next to the more traditional mouse and keyboard.
• Interfaces have also begun to incorporate gestures, handwriting and vision.

Issues
• We don’t yet have the complex technologies needed to make truly natural and seamless interfaces.

NFC 94 11 90 27 Mainstream 2016 Low Concept

Definition
Near field communication (NFC) allows devices, such as smartphones and wearables, to communicate wirelessly with tags, cards or other devices over a very short distance. It provides a natural and intuitive way for devices to interact with objects from the real world. Smartphones significantly increase the potential of NFC.

Applications
NFC devices:
• NFC tags — enabling access, providing related information or initiating a transaction, for instance
• Smart posters — offering a coupon or opening a website, for instance
• Contactless or dual-interface smart cards — delivering file/product updates
• NFC electronic shelf labels — which are a key enabler of the connected store
• NFC-enabled user devices, such as smartphones or wearables
• Ticketing — acting as a (wireless) badge, coupon or ticket
• Payments and micro-payments — through swipe to pay
• Data exchange — by tapping to exchange data, such as electronic business cards
• Loyalty — such as picking up information and rewards
• Security — enabling building access, for instance
• Printing — such as transferring pictures to a printer
• Retail — providing access to deals, product information, inventory and more, and enabling digital shopping carts and check-out

Trajectory

Impact
• Standardizes and facilitates the integration of contactless communication in end-user electronic devices:
— Extending the functionality of those products
— Making them interoperable with existing contactless infrastructure
• Dematerializes tickets, coupons, badges and cards
• Makes payment more convenient, with NFC wristbands gaining popularity in micro-payments and tap-to-pay travel
• May allow technology companies to disrupt the payment/transaction value chain
• Could enhance privacy in future smartphone applications

Evolution
• Nokia introduced an NFC-capable phone in 2006.
• Google produced its first Android NFC phone in 2010.
• Google Wallet, an app that used NFC to make mobile payments, first appeared in September 2011.
• Google announced support of Host Card Emulation (HCE) in Android in 2013, enabling secure mobile payment and access solutions.
• Visa and MasterCard added their support for HCE in 2014, leading to wider adoption of NFC by banks.
• Apple entered the market in 2014, with Apple Pay appearing later that year.
• NFC-enabled payments are gaining increased traction, providing a solid foundation for NFC’s growth.
• NFC technology has been integrated into wearables, tablets, industrial handheld computers and more.
• Non-payment NFC applications are expected to thrive, particularly in retail.

Issues
• Industry-wide standards are only just beginning to emerge.
• Lack of consistency across device types means a solution may not perform consistently.
• Two different security models (SE and HCE) exist and devices may need to support both.
• Trusted Service Management (TSM) is needed to ensure applications from different issuers (including Telcos, banks, transport companies, governments and cities) coexist securely.
• A lack of maturity and consistency, coupled with short lifecycles, may lead to infrastructure assets being refreshed without NFC as the primary mobile strategy.

Open Source Hardware 93 34 89 44 Early Adoption 2017 Low Concept

Definition
The open source hardware model extends the ideas and methodologies popularized in open source software development to hardware development. Documentation — including schematics, diagrams, parts lists and related specifications — is published with open source licenses so other teams can modify and improve it, based on specific needs. These designs are sometimes combined with more traditional open source software, such as operating systems, firmware or development tools. For instance, both the Linux and Android operating systems are being used in embedded devices.

Applications
• Prototyping new devices, such as internet-connected sensor devices
• Avoiding vendor lock-in
• Maintaining systems long term, over decades, through leveraging hardware design availability
• Providing server and network infrastructures for internet-scale data centers, mainly driven by the Open Compute Project
• Fostering the adoption of industrial vendors’ core technology standards while attracting an ecosystem of partners
• Accelerating innovation
• Popularizing the distributed fab lab model proposed by MIT

Trajectory

Impact
• Supports hardware innovation and standards development
• Enhances long-term maintenance capabilities
• Enables the creation of low-cost devices and infrastructures
• Offers alternatives for commercial data center server and networking infrastructure
• Allows enterprises to build materials dedicated to their needs, while fostering collaborative developments from an open community of contributors

Evolution
• Initially driven primarily by the open source community.
• Traction in the enterprise world — from both industrial users and hardware vendors — is increasing.
• Interest is expanding across diverse industries — including processors, servers, 3D printing and prototyping, ambient information devices, mobile phones, notebooks and robotics.
• There is a great deal of interest in emerging countries.
• Open source hardware is expected to continue to evolve in three directions:
— Supporting hardware innovation and standards development
— Guaranteeing long-term maintenance capabilities
— Creating low-cost devices and infrastructures, including network infrastructures

Issues
• Business models are adapted to core technologies, where users and vendors have a mutual interest in sharing innovation, fostering standards and minimizing cost.
• Beyond small project communities, certification is required for many uses — security or radio-electric interference certification, for instance.
• Fabrication occurs in small batches in small projects, making for low cost-efficiency.
• Intellectual property issues could block some developments.

Plastic Transistors 83 62 87 83 Emerging 2018 Low Concept

Definition
Plastic transistors reflect advances in material sciences that provide alternatives to traditional electronics. Based on organic polymers that have electronic properties, these include OLEDs (organic light-emitting diodes). These materials are easily printed onto different types of substrates, allowing complex circuitry to be printed on flexible plastics — something that is not possible with traditional electronics.

Applications
• Flexible displays
• Conductive ink
• Printable computer circuits
• Transparent circuits
• Wearable computing
• Smart bandages
• RFID tags
• Plastic solar cells

Trajectory

Impact
• Provides the flexibility that allows computing abilities to be added to dynamic environments including clothes, tight enclosures and organic tissues
• Integrates easily with other additive manufacturing methods, such as 3D printing
• Enables the low-cost volume fabrication that is essential for widely distributed solutions such as RFID tags

Evolution
• Basic R&D around organic electronics began during the last quarter of the 20th century.
• OLED screens and printed RFID tags are widely deployed today and popular in consumer devices, including smartphones and e-readers.
• Other organic electronic technologies look very promising for cheap and durable solar energy panels.

Issues
• Poor electronic efficiency, when compared with traditional electronics, may impact some use cases.
• In some domains, including health-related devices, safety issues may limit adoption.
• For the technology to really take off it needs to be properly integrated with other emergent fabrication technologies, such as 3D printing in a print-your-own-electronics model.

Privacy-Enhancing Technologies 51 26 63 28 Adolescent 2018 High Concept

Definition
Privacy-enhancing technologies (PETs) refer to technologies involved in protecting or masking personal data (whether of employees, customers or citizens) to achieve compliance with data protection legislation and sustain trusted customer relationships. PETs not only protect very sensitive data (such as credit card information, financial data or health records), they also shield the very personal information (including purchasing habits, interests, social connections and interactions) that digital users are keen to allow some services to leverage, but only provided their privacy is respected. As such, PETs reach beyond the technologies traditionally dedicated to preserving data confidentiality (such as access control and encryption); they also encompass technologies and techniques for ensuring data usage is limited to the intended purposes or otherwise protected, including homomorphic encryption, data masking, anonymization and pseudonymization.
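
Two of the simpler techniques can be sketched in a few lines: keyed pseudonymization (records remain linkable without exposing the raw identifier) and masking of a sensitive field. This is an illustrative sketch, not a complete or production-grade PET.

```python
# Minimal sketch of two common PET techniques: keyed pseudonymization
# (so records can be linked without exposing the raw identifier) and
# masking of a sensitive field before it leaves a trusted zone.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-keep-me-out-of-source-control"


def pseudonymize(identifier: str) -> str:
    """Deterministic pseudonym: same input, same token, but not reversible without the key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]


def mask_card_number(pan: str) -> str:
    """Keep only the last four digits of a payment card number."""
    return "*" * (len(pan) - 4) + pan[-4:]


if __name__ == "__main__":
    print(pseudonymize("alice@example.com"))      # stable 16-character token
    print(mask_card_number("4111111111111111"))   # ************1111
```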

Applications
• Commerce, finance, Telco, public services and healthcare — any domains where customers leave sensitive personal data
• Social media and networks along with collaborative services — also domains where customers share selected data to a selected audience
• Personal employee data

Trajectory

Impact
• Gives individuals the confidence to share or trade access to their personal data for digital services
• Enables new data monetization business models while ensuring compliance to various data protection legislations
• Mitigates the financial and legal risk linked to regulation violation
• Supports Corporate Social Responsibility

Evolution
• Early work started in the mid-1970s, with the birth of the concepts of anonymity and unlinkability.
• Privacy homomorphism technologies were introduced at that time, but had many limitations and vulnerabilities.
• A breakthrough fully homomorphic encryption (FHE) scheme was proposed in 2009.
• More recently, business focus has been on mechanisms for digital identity management with privacy respect.
• Today, simplified schemes (somewhat homomorphic encryption, SHE) allow efficient evaluation of some simple and specific logic functions.
• The domain is evolving along with the concept of the ‘personal data economy’.

Issues
• The notion of privacy is a culturally evolving one and boundaries are moving between generations.
• The notion of consent is essential, but may not be sufficient over time since the data you agree to share today may prove embarrassing in the long run.
• As a result, the domain is evolving around two contradictory trends:
— Some players are arguing that the very concept of privacy is dead in a digital world.
— Nevertheless, public authorities are frequently pushing new regulations (such as the European ‘right to be forgotten’) to protect customers and citizens.

Quantum Computing 22 35 32 43 Emerging 2019+ High Concept

Definition
Quantum computers are computation systems that use quantum-mechanical phenomena (such as superposition and entanglement) to execute operations on data. Built over the basic element of the qubit (which is in a quantum superposition of states, hence having multiple values simultaneously), quantum computing’s main advantage is its ability to execute some types of quantum algorithms exponentially faster than the best possible classical computing alternative.
These algorithms are currently undergoing rapid development and could one day overcome significant limitations of current technologies in combinatorial analysis. Applications range from decryption to operational research, optimization, simulation (with quantum models) and Big Data analysis (neural networks).
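
The toy state-vector simulation below (ordinary numpy, not a quantum device) shows the basic ingredients: a Hadamard gate puts one qubit into superposition and a CNOT entangles it with a second, yielding a Bell state in which only the outcomes 00 and 11 are ever observed.

```python
# Toy state-vector simulation of two qubits: a Hadamard followed by a CNOT
# produces an entangled Bell state, the kind of superposition that gives
# quantum algorithms their parallelism.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                      # control = first qubit

state = np.zeros(4)
state[0] = 1.0                                       # start in |00>

state = np.kron(H, I) @ state                        # superpose the first qubit
state = CNOT @ state                                 # entangle it with the second

probabilities = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probabilities):
    print(f"P(|{basis}>) = {p:.2f}")                 # 0.50, 0.00, 0.00, 0.50
```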

Applications
• Breaking cryptographic standards – emerging quantum safe cryptography aims to overcome this threat, deriving new cryptographic methods able to resist attacks by quantum computers
• Trading – in the financial sector
• Recognizing patterns – in defense, homeland security, Telco, utilities and insurance
• Speeding up lead compound discovery – in the pharmaceuticals industry
• Accelerating simulations – in the chemical industry and quantum physics

Trajectory

Impact
• Allows enormous, complex problems to be solved in a reasonable amount of time
• Accelerates machine learning and prescriptive analysis
• Threatens to break several existing cryptography standards
• May introduce a general uncertainty around the safety of currently safe encryption algorithms

Evolution
• The theoretical basis of quantum computing was established in the last quarter of the 20th century.
• The first real quantum computing devices were invented around the millennium.
• The first commercial systems based on quantum technology (D-Wave) reached the market around 2010.
• Though current practical implementations of quantum computers are limited to a few dozen or a few hundred qubits, rapid development is underway.
• Big internet companies (such as Google) recently began testing quantum-based technology.
• The U.S. National Security Agency, China and other governments are ramping up their quantum safe projects.
• Quantum computing (and other quantum technologies) is expected to become a flagship project for the European Union.
• Atos is among the very few companies in the world to work on quantum computing, targeting both Big Data and security applications.

Issues
• Quantum computing poses a threat to conventional information security systems, requiring the development of quantum safe cryptography before the widespread use of quantum computers.
• Kaspersky Lab has dubbed the unpreparedness for quantum attacks the ‘Cryptopocalypse’.
• Trained resources are scarce in this very complex field and often unconcerned about quantum computing’s impact on business or society.
• Quantum programming languages are still very basic and require a different way of thinking.
• The counter-intuitive nature of quantum logic is hampering development at the engineering level.
• The technology currently requires low temperatures, physical isolation and other technical complexities that are currently more expensive than the computing itself.
• There is uncertainty over the future performance of the wide variety of competing quantum computing architectures.

SDx 58 33 74 24 Early Adoption 2018 High Concept

Definition
Software-defined anything/everything (SDx) is an approach that replaces legacy — and often specialized — hardware controlled by physical mechanisms with software running on commodity hardware platforms. The concept may be applied to a wide variety of aspects of an IT system, including networking, compute, storage, management, security and more.
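
The sketch below illustrates the underlying pattern in a vendor-neutral way: infrastructure is expressed as declarative desired state, and a controller computes the changes needed to reconcile the actual state toward it. The network names and attributes are invented for the example.

```python
# Illustrative sketch of the software-defined pattern: infrastructure is
# described as declarative desired state, and a controller reconciles the
# actual state toward it instead of an operator changing hardware by hand.
desired_state = {
    "web-net": {"vlan": 10, "bandwidth_mbps": 1000},
    "db-net":  {"vlan": 20, "bandwidth_mbps": 500},
}

actual_state = {
    "web-net":    {"vlan": 10, "bandwidth_mbps": 100},   # drifted from the desired config
    "legacy-net": {"vlan": 99, "bandwidth_mbps": 100},   # no longer wanted
}


def reconcile(desired: dict, actual: dict) -> list:
    """Compute the changes a controller would push to the underlying devices."""
    actions = []
    for name, config in desired.items():
        if name not in actual:
            actions.append(("create", name, config))
        elif actual[name] != config:
            actions.append(("update", name, config))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions


if __name__ == "__main__":
    for action in reconcile(desired_state, actual_state):
        print(action)
```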

Applications
• Software-defined networking (SDN) — where a programmable remote controller, which is decoupled from the physical network devices, forwards the data through the network.
• Software-defined compute (SDC or virtualization) — decouples CPU and memory resources from physical hardware to create isolated software containers (virtual machines) that can be run simultaneously in the same physical server.
• Software-defined storage (SDS or storage virtualization) — decouples storage functions (including backup) from the physical hardware, with software automating and optimizing the provisioning of the storage resources.
• Software-defined datacenter (SDDC) — has all resources (including CPU, memory, storage and security) virtualized and delivered as a service.
• Network Function Virtualization (NFV) — combines SDC, SDS and SDN as a solution found within the Telco industry.

Trajectory

Impact
• Raises the level of automation possible in infrastructure management.
• Optimizes resource usage by making resources easier to manage
• Enhances performance by enabling dynamic, on-demand (re)allocation of resources
• Allows spurious changes in resource state to be reacted to automatically, improving availability
• Reduces complexity and errors since there is no need to understand details around the underlying hardware
• Improves visibility of the status of resources, enabling the development of more sophisticated control functions, services and applications
• Allows traditional distributed control to become logical and centralized
• Cuts cost by improving resource utilization, opening up commodity hardware options and reducing system maintenance time and effort
• Reduces time-to-market for new products and services

Evolution
• Standardization efforts are already widespread and expected to continue to evolve.
• The SDN evolution, for example, is mainly being driven by the ONF (Open Networking Foundation) while being embraced by the IETF (Internet Engineering Task Force) and the ETSI (European Telecommunications Standards Institute).
• Related efforts are ongoing within industry and the open source community (OpenDaylight and OpenNFV for SDN, for instance)
• Companies generally tend to deploy open implementations and specifications that may then become candidates for de facto standards.

Issues
• The transition from legacy systems to SDx can be complex.
• The centralized logic has the potential to become a single point of failure. However, solutions that distribute this logically centralized control do exist.
• A change in a management policy might result in temporary loops or errors that then lead to failures. Ongoing research aims to tackle this issue.

Self-Adaptive Security 46.5 29 50.5 37 Emerging 2018 High Concept

Definition
With the growth of cloud, APIs and the Internet of Things (IoT), cybercrime is constantly increasing in volume, sophistication and impact. As a result, the paradigm of security must change. After the perimeter security of the 1990s (which emphasized network defenses) and the in-depth security of the 2000s (which emphasized multiple protection layers), cyber defense strategies have now evolved toward new self-adaptive security principles. This pre-emptive approach to security moves the emphasis from protection to real-time detection and response that adapts defenses immediately. Technologies and processes incorporate Security Operation Centers (SOCs), which rely on new generation Security Information and Event Management (SIEM) technologies enhanced with machine learning and prescriptive analytics. Self-adaptive security also relies on new generations of context-aware security technologies (including Identity and Access Management, network security, and device and smart machine security) that dynamically adapt to threats.
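
A heavily simplified sketch of the detect-and-respond loop: a rolling baseline of an event rate (failed logins per minute, say), a basic statistical anomaly test, and an automated response hook. Real SOC/SIEM pipelines use far richer models; the threshold, window and block_source function here are illustrative assumptions.

```python
# Minimal sketch of the detect-and-respond loop: a rolling baseline of an
# event rate, a simple anomaly test, and a simulated adaptive response
# when the rate deviates sharply from normal behavior.
from collections import deque
from statistics import mean, stdev


def block_source(source: str) -> None:
    # Placeholder for a real response (firewall rule, step-up authentication, ...).
    print(f"adaptive response: temporarily blocking {source}")


def monitor(events_per_minute, source="10.0.0.23", window=10, z_threshold=3.0):
    history = deque(maxlen=window)
    for rate in events_per_minute:
        if len(history) == history.maxlen:
            baseline, spread = mean(history), stdev(history)
            if spread > 0 and (rate - baseline) / spread > z_threshold:
                block_source(source)
        history.append(rate)


if __name__ == "__main__":
    # Ten normal minutes of failed-login counts, then a sudden burst.
    monitor([4, 5, 3, 6, 4, 5, 4, 6, 5, 4, 60])
```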

Applications
• Protecting information systems belonging to boundaryless corporations and open business ecosystems
• Offering better protection against the latest generations of threats, including Advanced Persistent Threats (APTs)
• Adapting security strategies to the specific business challenges of individual organizations within diverse industry contexts

Trajectory

Impact
• Mitigates attacks (including hacktivism, cybercrime from mafias, fraud, industrial espionage and cyberwar attacks) before they significantly threaten the availability, data integrity and confidentiality of systems
• Facilitates forensics by enabling effective and accurate aggregation of evidence

Evolution
• Experts estimate that detection and response share will move from less than 30% of IT security budgets today to 75% by 2020.
• Open platforms are being developed that will facilitate two-way communication among multi-vendor security products.
• Self-adaptive security depends on the convergence between security and Big Data technologies, with the notable addition of machine learning to traditional SIEM technologies (see ‘Insight Platforms’).
• Solutions are rapidly increasing the number of data sources (including the Dark Net) they are leveraging — well beyond classical security sensors and devices — to analyze suspicious behaviors and foresee or detect attacks.
• Self-adaptive security is becoming increasingly verticalized, taking into account industry-specific risks in finance, manufacturing, utilities and other domains.

Issues
• Self-adaptive security requires strong and business-driven Governance, Risk and Compliance analysis to help systems understand what really must be monitored and reacted to.
• Seamless communication between security devices is needed to ensure contextual information is shared across internal silos to enable the automated configuration changes that strengthen security and block attacks before they happen.
• Self-adaptive security often relies on human intelligence for confirming adaptive actions on the fly after automated risk detection. This is vital for avoiding any overreaction to false positive signals, which may impact the business in unexpected ways (see Amplified Intelligence in the CxO agenda).
• The increased focus on rapid detection and response should not lead to decreased protection levels. Protection levels should, instead, be more targeted and more reactive. As such, end-to-end security, ‘Zero trust’ security architectures and very high security zones for critical applications (such as ‘Application Resource Islands’) are often set up to complement self-adaptive security.

Smart Machines 15 5 29 9 Emerging 2019+ Transformational Concept

Definition
Smart machines refer to systems embedded with cognitive computing capabilities that are able to make decisions and solve problems without human intervention. They perform activities and tasks traditionally conducted by humans, boosting efficiency and productivity.

Applications
• Autonomous robots or vehicles — reshaping transportation, logistics, distribution and supply chain management
• Expert systems — emulating the decision-making capabilities of a human to solve problems that typically require expert input in sectors such as automotive, consumer electronics, healthcare and industry
• Intelligent virtual assistants — such as avatars, which provide information and service assistance to customers
• Sensors — collecting data about our physical environment without direct human intervention

Trajectory

Impact
• Increase efficiency and productivity
• Raise earnings and profit margin potential
• Offer a viable alternative to unskilled labor while augmenting demand for skilled labor
• Complement tasks that cannot be substituted by computerization

Evolution
• Prototype autonomous vehicles, advanced robots, virtual personal assistants and smart advisers already exist.
• The explosive growth of sensor-based data will provide them with more context about the physical world.
• They will evolve to work more autonomously thanks to cognitive computing, advanced algorithms and artificial intelligence.
• Over time, more machines will enter our lives and they will become better, faster and cheaper, according to Moore’s law.
• Advances in artificial intelligence, speech recognition and machine learning mean knowledge work can now be automated.
• Machines will make increasingly significant business decisions over which humans have decreasing control.
• As such, smart machines have the potential to significantly impact the business dynamics of at least one-third of the industries in the developed world in the future.

Issues
• CIOs must make the business aware of the risks and opportunities as smart machines work more autonomously in support of business goals.
• There are real concerns that smart machines will replace both white and blue collar jobs in industries ranging from manufacturing and warehousing to shipping.
• The impact on society will need to be understood and appropriate action may need to be taken in areas such as taxation.

Swarm Computing 39 43 54 40 Emerging 2019+ High Concept

Definition
Also known as swarm intelligence or hive computing, swarm computing refers to massively distributed, self-organizing systems of agents that work collaboratively towards a defined outcome. Each agent within the system has a simple set of rules to follow and only interacts with its local environment. The aggregate behavior of the agents leads to the emergence of ‘intelligent’ global behavior. With the number of nodes comprising the Internet of Everything (IoE) predicted to rise, and many individual nodes likely to have limited compute capabilities, each would be complemented by connections to other objects in a community — thus creating an IoE swarm.
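
As a hedged illustration of how simple local rules can produce global behavior, the sketch below (TypeScript) gives each agent a single rule: drift toward the average position of nearby neighbours. Clustering then emerges without any central coordination. The Agent shape, radius and step rate are assumptions chosen for illustration.

```typescript
// Minimal swarm sketch: each agent follows one local rule (drift toward the
// average position of neighbours within a fixed radius). No agent knows the
// global state, yet the population converges into clusters (emergent behavior).
interface Agent { x: number; y: number; }

function step(agents: Agent[], radius = 10, rate = 0.05): Agent[] {
  return agents.map(a => {
    const neighbours = agents.filter(
      b => b !== a && Math.hypot(a.x - b.x, a.y - b.y) < radius
    );
    if (neighbours.length === 0) return { ...a };
    const cx = neighbours.reduce((s, b) => s + b.x, 0) / neighbours.length;
    const cy = neighbours.reduce((s, b) => s + b.y, 0) / neighbours.length;
    // Local rule: move a small step toward the neighbourhood centre.
    return { x: a.x + rate * (cx - a.x), y: a.y + rate * (cy - a.y) };
  });
}

// Usage: scatter 100 agents at random and iterate the rule.
let swarm: Agent[] = Array.from({ length: 100 }, () => ({
  x: Math.random() * 100,
  y: Math.random() * 100,
}));
for (let i = 0; i < 500; i++) swarm = step(swarm);
```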

Applications
• Optimizing logistic chains and transportation
• Controlling driverless vehicles, optimizing journey times and road usage
• Coordinating the operation of complex infrastructures such as smart cities and distributed power grids
• Predicting the behavior of complex systems, such as transport or logistic networks
• Providing surveillance in military scenarios
• Delivering sensor network security
• Managing crowds
• Enabling opportunistic collaboration
• Managing the economics of participation by helping to engage resources, encourage contribution and optimize services

Trajectory

Impact
• Creates dynamic ecosystems of cyber-physical devices, each adding to the collective capability and insight
• Allows operations and interactions to adapt according to context
• Improves efficiency and reliability of service provision through:
— Enabling ad-hoc collaborations, which help build service networks
— Optimizing delivery schemes and communication patterns, which allow information and services to be shared and exchanged
— Creating reliability and dependability from volatile resources, which helps manage uncertainty
• Allows work to be distributed across simpler devices (or robots)
• Complements other forms of artificial intelligence
• Provides a significant step toward massively distributed computing models

Evolution
• The notion of swarm computing first emerged in the 1990s.
• Research into algorithms and simulations began shortly afterwards.
• Swarm computing was first used for logistics and simulations.
• In time, the miniaturization of mobile hardware made swarms of micro-robots feasible.
• Currently, swarm computing is closely linked with robotics, the Internet of Things (IoT) and distributed cloud models.
• In the long-term, nano-robots swarms may, for example, prove useful in medicine.

Issues
• Agent-based programming is complex and skilled practitioners are currently hard to find.
• Integrating swarms with other centralized control mechanisms is also complex.
• Security is a huge concern, especially if individual robots within a swarm are under the primary control of different individuals or organizations.
• Communication protocols need to be standardized to enable flexible and dynamic interaction.
• There is a possibility that non-deterministic behaviors, including unexpected or out-of-control ‘emergent’ behaviors, may arise.
— Swarm viruses, where swarm behaviors are influenced adversely by rogue components, may also emerge. These would have huge implications in use cases such as driverless vehicles.

Trusted Devices 79 31 72 58 Early Adoption 2017 Medium Concept

Definition
Trusted devices are terminals and software-powered objects and machines that are made secure and trustworthy in order to protect data and process availability, integrity and confidentiality. They include human interaction devices, such as smartphones and payment terminals, as well as autonomous devices, such as smart homes and smart machines.
Trusted devices rely on high-security design, hardened software and hardware, and intensive certification processes provided by trusted third parties (notably leveraging ‘Common Criteria’ norms).

Applications
• Securing human communications and interactions with digital systems
• Ensuring the protection of critical infrastructures and automated processes — including smart grids, connected healthcare, smart cities and transport

Trajectory

Impact
• Provides trusted terminals and smart machines that people can rely on, secured by design, with a trustworthiness level matched to risk and process criticality
• Mitigates risk of hacking and fraud at a time when cybercrime is becoming commonplace

Evolution
• Defense grade technologies have already been introduced into civil terminals, notably secure smartphones for governments and CxOs.
• The major trend today is the strengthening of objects on the Internet of Things (IoT) that are involved in critical processes.
• The next evolution is focusing on securing smart, autonomous machines. After all, as we become increasingly reliant on robot surgeons, smart cars and autonomous drones, absolute trust becomes vital.

Issues
• Most trusted devices require a combination of hardware and software security since software security alone increases the risk of hijacking.
• Establishing trust in today’s increasingly connected world requires the successful integration of secure elements (Secure IoT) and management of huge numbers of identities (Next Generation Identity & Access Management and Big Data).
• It may also require embedded contextual intelligence so that hacking attempts can be detected and appropriate countermeasures (such as the auto-destruction of sensitive information) can be taken automatically.

Ubiquitous PIM 56 61 69 51 Emerging 2019+ Medium Concept

Definition
Currently most people use more than four different information silos to store, manage and transfer their personal information — including Google, Outlook, Facebook, Twitter, WhatsApp, local drives, USB disks, OneDrive, Dropbox and more. Ubiquitous Personal Information Management (PIM) is a new approach in which all personal information is kept in a single, unified storage system yet remains accessible and consumable by multiple applications. The types of information most suited to ubiquitous PIM approaches include messages (email, social network posts, public website responses), contacts, documents (texts, pictures, video) and calendars.
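
One way to picture a ubiquitous PIM layer is a single store fed by silo-specific adapters, as in the hedged TypeScript sketch below. The PimItem shape, the SiloAdapter interface and the UnifiedStore class are purely hypothetical; real solutions would also need synchronization, conflict handling and access control.

```typescript
// Hypothetical unified PIM layer: every silo-specific adapter maps its items
// into one common shape, so any application can consume the same store.
interface PimItem {
  id: string;
  kind: 'message' | 'contact' | 'document' | 'event';
  created: Date;
  title: string;
  body?: string;
  links: string[]; // ids of related items (e.g. a message linked to a contact)
}

interface SiloAdapter {
  // Pulls items from one silo (email, social network, local drive, ...).
  fetch(): Promise<PimItem[]>;
}

class UnifiedStore {
  private items = new Map<string, PimItem>();

  async sync(adapters: SiloAdapter[]): Promise<void> {
    for (const adapter of adapters) {
      for (const item of await adapter.fetch()) this.items.set(item.id, item);
    }
  }

  query(kind: PimItem['kind']): PimItem[] {
    return [...this.items.values()].filter(i => i.kind === kind);
  }
}
```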

Applications
• Enterprise Social Networks — allowing all applications to use the same storage layer and exchange/share all information needed
• Collaboration — providing diverse collaboration tools access to the same information, with each tool tailored to the user’s level of expertise and personal preferences
• Business communications — unifying communications so users only need to focus on the dialog and not worry about the technology

Trajectory

Impact
• Reduces complexity and eliminates vertical information silos
• Increases usability by making information more relevant and consistent across a multitude of applications
• Cuts energy consumption as information no longer needs to be copied around and kept in sync
• Unlocks individuals from reliance on multiple dedicated social networks, collaboration and communication tools
• Allows individuals to select the networks and tools that suit them best
• Opens the door to a broad range of novel products and services
• Is aligned with the Internet of Things (IoT)

Evolution
• Applications built on local file systems began sharing more and more information.
• Mixed local and cloud-based storage solutions emerged, with some sharing information, but more of them exchanging it.
• Mixed local, cloud and ubiquitous PIM storage solutions then started to appear, initially combining contacts, communication, collaboration and documents through adapters.
• Soon, applications will leverage global ubiquitous storage, with information exchange between ubiquitous PIMs standardized.
• In the future, people will no longer need to worry about their information and a whole new domain of IT applications will start to flourish.

Issues
• An enormous amount of work needs to be done to realize ubiquitous PIM, but only a few groups and individuals are working on new ways to store information.
• In fact, there are huge gaps in the theoretical know-how needed.
• This situation will remain unchanged until the fragmentation of current data exchange systems becomes more evident.

Virtual Assistants 67 49 68 55 Adolescent 2018 Medium Concept

Definition
Virtual assistants are software agents that perform services or tasks on our behalf. They understand queries and can answer them in natural language. They exploit artificial intelligence, natural-language processing, machine learning, voice processing, and reasoning and knowledge representation to make human-machine interactions simpler, more natural and more appealing. The tasks and services they perform rely on written or spoken user input, context awareness and access to a variety of online sources, such as weather or traffic conditions, news, stock prices, schedules and retail prices.
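
The core loop of a virtual assistant (interpret the utterance, pick an intent, act) can be sketched very simply. The TypeScript toy below matches keywords against hypothetical intents and dispatches a handler; production assistants replace this with statistical NLP, dialogue state and context drawn from online sources.

```typescript
// Toy intent matcher: the intents and handlers below are purely illustrative.
type Handler = (utterance: string) => string;

const intents: { name: string; keywords: string[]; handle: Handler }[] = [
  { name: 'weather', keywords: ['weather', 'rain', 'forecast'],
    handle: () => 'Fetching the forecast for your current location…' },
  { name: 'schedule', keywords: ['meeting', 'calendar', 'remind'],
    handle: () => 'Checking your calendar…' },
];

function respond(utterance: string): string {
  const words = utterance.toLowerCase().split(/\W+/);
  let best = { score: 0, handle: (_: string) => "Sorry, I didn't catch that." };
  for (const intent of intents) {
    // Score each intent by how many of its keywords appear in the utterance.
    const score = intent.keywords.filter(k => words.includes(k)).length;
    if (score > best.score) best = { score, handle: intent.handle };
  }
  return best.handle(utterance);
}

console.log(respond('Will it rain tomorrow?')); // dispatches the weather handler
```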

Applications
• Administrative assistance — such as in-the-moment advice, querying and reminding, supporting both personal and business events
• Ongoing expert assistance — including longer-term projects or expert work where their learning capabilities mean the more context they’re exposed to, the better their assistance becomes
• Customer service assistance — including in retailing, banking, insurance and telecommunications where they supplement or supplant human customer-service representatives
• Data mining of large structured and unstructured data sets — discovering patterns and anomalies, proactively identifying problems, spotting opportunities and supporting decision-making
• Predictive expert assistance — anticipating events and taking action before events occur
• Virtual personal shopping assistants — learning, predicting and serving consumers’ tastes, needs and desires to help optimize their purchases

Trajectory

Impact
• Decrease cost and increase efficiency and customer satisfaction in customer service desks while unifying corporate image across different channels
• Enhance application usability and improve information access
• Are expected to manage more and more of our increasingly complex digital lives in the future
• May, in that case, transform customer engagement models with organizations needing to interact with customers through their assistant rather than directly; this would mean interactions need to be clear and concise — and would be significantly less susceptible to emotive advertising

Evolution
• In the 2000s, search engines evolved to include more context, refining results to make them more relevant.
• Natural Language Processing (NLP) systems evolved to allow users to interact with a computer in conversational language.
• More computing power became available, meaning natural language could be processed cost-effectively.
• The rise of virtual assistants such as Siri signaled search evolving toward a more personalized, interactive service and a gradual shift to an ecosystem of services mediated by a powerful software assistant.
• The virtual assistant will next be brought into the cloud, making it present on multiple devices: on users’ bodies and in their offices, homes and vehicles.
• Future assistants will be able to detect exactly where users are.
• Advances in sensors (for example, microphones, cameras, accelerometers and GPS in smartphones) will provide additional context for that assistant to leverage.
• Virtual assistants are also learning to detect emotions via voice analysis or facial expressions, and to recognize whether the user is moving, stationary or in a vehicle.
• Real-time learning capabilities will ensure assistants remain current, as long as they have access to relevant data sets, or allow them to branch out into new fields.

Issues
• The increasing fragmentation and complexity of our personal data reduces the level of assistance a virtual assistant can currently provide.
• Ubiquitous Personal Information Management (PIM) will help them interpret all our digital information, but they currently need adapters to connect to diverse data silos such as Outlook, Gmail, Microsoft Live, Facebook, Twitter, LinkedIn and even local file systems.
• The user needs to feel in control (for instance of where their data is used and by whom) and to be able to trust the assistant to act in their best interest.
• Virtual assistants that help users purchase things online will have the power to boost certain firms and disadvantage others.
• The ongoing displacement of human workers by computers and artificial intelligence applications may lead society to raise concerns about virtual assistants as they become more advanced.

Wearable Computing 65 21 46 41 Early Adoption 2017 High Concept

Definition
Wearable computing refers to miniature electronic devices with integrated sensing, computing and communication capabilities that are worn on the body. They leverage the wearer’s context — detected by embedded sensors — to deliver either general or specific services that enable the wearer to act in real time based on the information they provide (a minimal sensor-reading sketch follows the list below). Although the most popular wearables today are smart watches, wearables can be found on many different parts of the body:
• Attached to our wrist — such as smart watches, bracelets and wrist bands
• Around the head — including headbands and helmets
• In front of the eyes — such as glasses, contact lenses, bio-augmentation
• Around the ears — as earphones
• Implanted or embedded chips — including hearing aids
• On or in the hands — such as gloves or digital pens
• Smart clothing and textiles — including bandages, t-shirts, jackets, socks, bras, belts and shoes
• Embedded in jewelry — including rings and earrings
• On the skin — as tattoos
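
As a minimal illustration of deriving context from an embedded sensor, the browser sketch below (TypeScript) uses the standard devicemotion event to guess whether the wearer is roughly still or moving. The 1.5 m/s² threshold and the context labels are assumptions; real wearables fuse many sensors and far richer models.

```typescript
// Browser sketch using the standard devicemotion event: classify whether the
// wearer is roughly still or moving from accelerometer magnitude.
window.addEventListener('devicemotion', (event: DeviceMotionEvent) => {
  const a = event.accelerationIncludingGravity;
  if (!a || a.x === null || a.y === null || a.z === null) return;

  // Magnitude of acceleration minus gravity (~9.81 m/s²) as a crude activity signal.
  const magnitude = Math.abs(Math.hypot(a.x, a.y, a.z) - 9.81);
  const context = magnitude > 1.5 ? 'moving' : 'still'; // illustrative threshold

  // A real wearable service would feed this context into richer analytics.
  console.log(`Wearer appears to be ${context}`);
});
```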

Applications
• Cross industry
— Security — enhancing authorization, authentication and tracking
— Field services and maintenance — integrated with SCADA to help with troubleshooting, for instance
— Payments and expenses — improving mobile capabilities
— Training — providing support and recording and sharing activities
— Condition monitoring — collecting health or environmental information
— Direct interaction with smart objects — initiating cloud-based services
— Information — simplifying access to resources and sharing knowledge
• Industry specific
— Healthcare — training doctors, assisting surgery, monitoring patients’ and doctors’ health, recording treatment, supporting and monitoring medication regimes
— Retail — sharing product information, merchandising, warehousing and stock management
— Military and homeland security — monitoring activities, recording operations, navigation and identity recognition
— Transport — navigation, tracking, measuring drivers’ health
— Sport — monitoring vital signs, comparing results, adjusting to environment and recording activities

Trajectory

Impact
• In combination with cognitive computing, has the potential to enhance human abilities
• Allows customer service representatives to understand individuals’ desires and tastes
• Enables the Body Area Network concept — enhancing the overall picture of the wearer’s context while the wider IoT amplifies the device’s knowledge of the wearer
• Improves performance and cuts cost in field service and assisted maintenance
• Opens up new business opportunities:
— Delivering health insights for marketers to tap into
— Offering incentives in exchange for wearable usage and data
— Connecting mobile payments and customer loyalty
— Addressing novel customer problems that were previously out of reach of the brand

Evolution
• Wearables were initially used in military, healthcare and medicine.
• They are now focused mainly on wellbeing and activity monitoring for the B2C market.
• Interest is broadening to new domains in the B2B market, where smart watches, digital badges and smart bands are likely to be of most interest initially, followed by smart glasses, emerging intelligent textiles and embedded accessories.
• Shipments of wearables are expected to reach more than 60 million by 2017.
• Spending on wearable technology is projected to reach nearly $20 billion in 2018.
• Wearables may become a fully-integrated part of our personal and professional lives in the long term, providing a more discreet alternative to mobile while also enhancing mobile with additional insight.
• Some see wearables as the next step toward ambient computing.

Issues
• Interaction must be tuned to each device’s size, purpose and ergonomics.
• Devices would benefit from being able to automatically account for wider environmental surroundings.
• Advances in wearables depend on other technological advances — including in sensors, displays, batteries, communications and augmented reality.
• The effects of materials on the human organism (allergic reactions, for instance) and radiation must be manageable.
• Power management and heat dissipation are challenging in such small devices.
• Energy — along with a failover strategy — may need to be available during a complete usage cycle in some use cases.
• In critical situations, such as medical tasks, wearables and their users must abide by legal constraints.
• Wearables need to be transparent on information transmission, storage and processing, particularly where data is shared with a wider ecosystem.

Web-scale Computing 77 17 76 22 Early Adoption 2017 High Concept

Definition
Also known as web-scale computing, hyperscale is a large, distributed, grid computing environment that can scale out efficiently as data volumes and workload demands increase — sometimes exponentially. Compute, memory, networking, and storage resources are added quickly and cost-effectively.
Often built with stripped-down commercial hardware, hyperscale optimizes hardware usage.
Potentially millions of virtual servers may work together to accommodate increased computing demands without requiring additional physical space, cooling or electrical power.
With hyperscale, total cost of ownership (TCO) is typically measured in terms of high availability and the unit price for delivering an application and/or data.
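
The scale-out principle can be reduced to a simple sizing rule: keep average utilization inside a target band by adding or removing identical nodes. The TypeScript sketch below illustrates that rule with an entirely hypothetical Cluster shape and thresholds; real hyperscale schedulers act continuously and across many resource dimensions.

```typescript
// Toy scale-out rule: size the cluster so load / (nodes * capacity) stays
// near a target utilization. Shapes and numbers are illustrative assumptions.
interface Cluster { nodes: number; totalLoad: number; capacityPerNode: number; }

function desiredNodes(c: Cluster, targetUtilization = 0.6): number {
  return Math.max(1, Math.ceil(c.totalLoad / (c.capacityPerNode * targetUtilization)));
}

const cluster: Cluster = { nodes: 40, totalLoad: 3000, capacityPerNode: 100 };
console.log(desiredNodes(cluster)); // 50: scale out by adding 10 identical nodes
```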

Applications
• Enabling cloud, distributed storage and more across the web giants’ large distributed sites
• Supporting businesses that must handle extremely high volumes of data and/or process millions of transactions with minimal downtime — including banks, retail, mining and oil exploration, healthcare and pharmaceuticals
• Supporting emerging technologies — such as cognitive computing and the Internet of Everything

Trajectory

Impact
• Provides the robust and agile environment needed to support tomorrow’s intensive data processing, allowing organizations to maximize the value of their growing volumes of data
• Offers a common, scalable platform that easily adapts to the ever-shifting business and technology landscape
• Delivers space and power savings through its containerized data center units
• Requires fewer commercial servers, cutting hardware and administration cost
• Has the potential to transform how modern enterprises consume and manage IT

Evolution
• Public cloud giants such as Google, Facebook and Amazon first developed the technology to sustain their dramatic growth rates and deliver the agility they required without compromising service availability or quality.
• Soon, other organizations began to take advantage of the open source software these giants released.
• Hyperscale is now being explored by other types of businesses, including those with smaller and traditional enterprise IT environments.
• Further technological advances are paving the way for faster, cheaper and bigger hyperscale systems.
• It is anticipated that hyperscale may eventually be adopted by around half of global enterprises.

Issues
• Hyperscale modifies the IT roles and skills needed by an organization.
• Any serious growth in computing — including hyperscale — may challenge energy resources.

WebRTC 80.5 12 40 42 Early Adoption 2016 High Concept

Definition
A collaborative effort by IETF and W3C, WebRTC is a free, open web standard that enables platform-independent, web-based, plugin-free Real Time Communications (RTC) through standard browsers and on mobile devices.
In essence, the WebRTC standard defines media processing software for voice and video embedded in browsers. It’s extensible and provides a standard API that aligns with HTML5 along with a mechanism for passing session data between parties in communications sessions.
Commercial implementations are available and differentiate themselves by defining signaling or collaboration models and enabling multi-participant sessions.
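
For orientation, the sketch below shows the caller side of a WebRTC session using the standard browser APIs (RTCPeerConnection, getUserMedia). Because signaling is deliberately left out of the standard, the sendToPeer() function is a hypothetical placeholder for whatever channel (WebSocket, REST or otherwise) the application provides.

```typescript
// Minimal WebRTC caller sketch; signaling transport is application-specific.
declare function sendToPeer(message: object): void; // hypothetical signaling hook

async function startCall(): Promise<RTCPeerConnection> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  // Capture local audio/video and attach the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  // Exchange ICE candidates and the session description over the signaling channel.
  pc.onicecandidate = e => { if (e.candidate) sendToPeer({ candidate: e.candidate }); };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeer({ sdp: pc.localDescription });

  return pc;
}
```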

Applications
• Browser-to-browser communication — including voice, video chat, conferencing, screen sharing, customer engagement, entertainment, gaming and file transfer and sharing
• Enterprise communication — such as click-to-call, contact centers, unified communications and modern team collaboration
• Education and training — including virtual classrooms and synchronous learning
• Distributed messaging and coordination — allowing humans and machines to interact in a more natural and transparent way

Trajectory

Impact
• Simplifies deployment and reduces cost of browser-based communication solutions by eliminating the need for proprietary plug-ins and apps
• Standardizes underlying transport protocols and security mechanisms
• Allows any connected device with a WebRTC-enabled browser to become a communication device
• Makes voice and video more readily available in browser-based applications
• Enhances customer engagement and increases revenues by opening the door to rich interactions
• Allows humans and machines to interact in a more natural and transparent way
• Opens the door to new services that Telco carriers — who are threatened by over-the-top voice services — can actively participate in

Evolution
• WebRTC was born in 2011 as a free, open project sponsored by Google for enabling RTC communications in different browsers.
• Firefox, Opera and other browsers began to support WebRTC just over a year later.
• WebRTC is supported by almost all major desktop and mobile browsers and used by a growing number of consumer services, such as Facebook Messenger, and modern enterprise collaboration services (such as Unify’s Circuit) to implement voice and video communications.
• WebRTC is part of HTML5 and, as such, it’s expected to be widely supported going forward.
• Potentially five billion devices could be WebRTC-enabled by 2018, with embedded devices leading novel WebRTC-based applications.
• Peer-to-peer WebRTC-based video services will commoditize quickly.
• WebRTC is expected to replace existing protocols in the long term.
• Enterprise grade multi-participant WebRTC and integrated collaboration services will remain compelling for the next five years.

Issues
• The WebRTC 1.0 standard has not yet been finalized, with the community divided over some aspects of it.
• The lack of standards for some higher level services could lead to compatibility and/or interoperability issues and market fragmentation, while offering differentiation for commercial products.
• Adoption has been slowed by lack of support from some major browsers.
• Integration with other non-web based RTC communication platforms requires additional software.
• WebRTC could lead to new kinds of security threats that might require new and innovative approaches.

Wireless Power 80 47 85 47 Adolescent 2018 Low Concept

Definition
Wireless power describes the transmission of electrical power without solid wires, using electromagnetic fields instead. There are two types of wireless power: near-field charging, which uses inductive or capacitive coupling, and far-field or radiative charging, which uses beams from electromagnetic devices.

Applications
• Wearables — including smart watches
• Health devices — including hearing aids and pacemakers
• Electric transportation — including cars and buses
• Consumer gadgets — including smartphones and tablets
• Industry — including assets in factories and warehouses

Trajectory

Impact
• Allows electrical devices to be powered where interconnecting wires are inconvenient, hazardous or simply not possible
• Solves energy storage issues for the electric car, while increasing its range and reliability
• Opens the door to an uninterrupted, pervasive computing environment where energy availability is critical

Evolution
• Induction charging is not a new technology, but its usage is restricted to very few devices.
• The technology has been evolving to provide higher levels of charge, at greater distances.
• Standardization seems to be improving, with the most important standards converging in the AirFuel Alliance.
• The explosion of wearables and sensor networks, and their limited energy capacities, has made wireless power transmission a more interesting proposition.
• Increasing capacities are taking it to wider domains, including transportation (for electric vehicles) and healthcare (for medical devices).
• Tests are underway with special road lanes wirelessly transferring power to electric cars while they drive.

Issues
• Energy efficiency needs to be optimized further by eliminating potential energy losses.
• Energy management would benefit from being simplified.
• There are currently several standards, limiting interoperability.
• Security concerns are limiting its use in certain environments.
• Social fears are limiting its uptake, though concerns about its impact on health have no scientific basis.

Semantic Technologies 72 26 44 23 Early Adoption 2017 High Concept

Definition
Semantic technologies encompass a diverse set of technologies aimed at helping machines make sense of large or complex data sets without being supplied any knowledge about that data. In essence, they bring structure and meaning to information, often by providing machine-readable metadata that is associated with human-readable content and specifies its meaning.
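
To show what machine-readable meaning looks like in practice, the hedged sketch below queries a SPARQL endpoint over HTTP (TypeScript, using the standard SPARQL 1.1 protocol media types). The endpoint URL and the ex:worksOn property are hypothetical; foaf:name is a real, widely used vocabulary term.

```typescript
// Hedged sketch: ask a (hypothetical) SPARQL endpoint for people and their projects.
const endpoint = 'https://example.org/sparql'; // hypothetical endpoint

const query = `
  PREFIX foaf: <http://xmlns.com/foaf/0.1/>
  PREFIX ex:   <http://example.org/vocab#>
  SELECT ?name ?project WHERE {
    ?person foaf:name ?name ;
            ex:worksOn ?project .
  } LIMIT 10`;

async function runQuery(): Promise<void> {
  const response = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/sparql-query',
      Accept: 'application/sparql-results+json',
    },
    body: query,
  });
  const results = await response.json();
  // Standard SPARQL JSON results: each binding carries one value per selected variable.
  for (const row of results.results.bindings) {
    console.log(row.name.value, '->', row.project.value);
  }
}
```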

Applications
• Natural-language processing (NLP) — processing unstructured text content
• Data mining technologies — uncovering patterns (trends and correlations) within large sets of data
• Artificial intelligence systems — using reasoning models to answer complex questions
• Classification technologies — employing heuristics and rules to categorize data
• Semantic Web — allowing programs to fetch specific information and answer non-obvious queries
• Linked Data — a subset of the Semantic Web and a way of publishing data so it is interlinked and more useful, enabling automatic reading by computers
• Semantic data integration — enhancing analysis and decision making by combining heterogeneous data sources
• Semantic search technologies — allowing people to locate information by concept

Trajectory

Impact
• Enable semantics-based technology automation by providing that meaning explicitly, so machines don’t need to derive it from within the data
• Help humans and machines understand and communicate with each other at the same level as people do among themselves
• Allow technology to automatically use services on behalf of a user
• Bring enterprise data integration to the next level by integrating heterogeneous sources and augmenting corporate data with publicly available databases
• Give computers a better way to represent, exchange and manipulate knowledge
• Increase agility by dealing with knowledge instead of just data
• Allow computers to learn by themselves (machine learning)
• Make data collected for a given purpose usable in other contexts

Evolution
• RDF (which stores semantic metadata) and OWL (which defines the ontologies that, in turn, specify information meaning and relationships) were introduced and later standardized.
• The SPARQL semantic query language was introduced and later standardized.
• Tim Berners-Lee published his thoughts on Linked Data on the W3C website.
• The Semantic Web at a global scale is still a dream, but related ideas, concepts, standards and technologies are being used on smaller scale to provide semantically-enriched services.
• Companies are providing commercial support for semantic solutions and using them in production.
• Existing and legacy SQL-based data integration solutions are being turned into a semantic data cloud.
• Standardization efforts for transformations (RDB2RDF) are enabling more valuable information to be retrieved out of the existing data maze by combining it with other data sources.
• Companies are increasingly providing semantic search engines heavily based on NLP.

Issues
• Ontologies are not easy to design and their relevance can always be debated.
• Gaining consensus around ontologies is difficult, even in very specific fields, as is discarding spurious information from them.
• Data providers usually generate revenue from advertisements that have to be seen by human beings, so they may have no interest in providing semantically-enriched information.
• A means for automatically establishing correspondences between vocabularies is needed for when different data sources use different concept definitions.
• The Semantic Web is still unrealized due to the size and complexity of the ambition.

The radar diagram provides a pictorial view of our findings, allowing you to quickly understand how disruptive each emerging technology is likely to be and the actions you might consider taking. Polar co-ordinates depict the likely time to impact your business along with the potential size of the impact, while colors represent the current maturity of each topic.

Radar View
