Last year about this time, we had the opportunity to discuss the state of HPC and the Aurora supercomputer with Rick Stevens and Mike Papka of Argonne National Lab. In the run-up to SC24, we are delighted to do the same! Rick and Mike kindly carved out some time to join us for another wide-ranging discussion.
We discuss Aurora, Exascale, AI, reliability at scale, technology adoption agility, datacenter power and cooling, cloud computing, and quantum computing.
We’d like to encourage you to also listen to episodes 15 and 16, where we discuss AI in science with Prof. Stevens, and episode 75, referenced above, recorded just before SC23.
Rick Stevens is Argonne’s Associate Laboratory Director for the Computing, Environment and Life Sciences (CELS) Directorate and an Argonne Distinguished Fellow. He is also a Professor of Computer Science at the University of Chicago. He previously led the Exascale Computing Initiative at Argonne.
Michael Papka is a senior scientist at Argonne National Laboratory where he is also deputy associate laboratory director for Computing, Environment and Life Sciences (CELS) and division director of the Argonne Leadership Computing Facility (ALCF).
We discuss the state of Quantum Information Science with our special guest Dr. Travis Humble, a global authority on the subject, director of the Quantum Science Center, a Distinguished Scientist at Oak Ridge National Laboratory, and director of the lab’s Quantum Computing Institute. QSC is a partnership funded by the Department of Energy, comprising leading academic institutions, national labs, and corporations. Dr. Humble is editor-in-chief of ACM Transactions on Quantum Computing, Associate Editor for Quantum Information Processing, and co-chair of the IEEE Quantum Initiative. He also holds a joint faculty appointment with the University of Tennessee Bredesen Center for Interdisciplinary Research and Graduate Education. Please join us for an insightful discussion of quantum technologies and their impact on supercomputing and scientific discovery.
2023 Year in Review is our annual special edition as we look back at one of the more eventful years in recent history for HPC, AI, Quantum Computing, and other advanced technologies. The list below includes time stamps (in minutes and seconds) and the associated topic in the podcast.
02:00 – HPC
03:45 – AI
08:03 – Metaverse
12:01 – Chips, GPUs, Accelerators
14:00 – GPU Competition
15:46 – Open Source
17:54 – Aurora Supercomputer
20:21 – TOP500
20:55 – Cloud in TOP10
21:53 – China
24:15 – Europe
25:55 – Quantum Computing
30:12 – Photonics
31:35 – Cryptocurrencies
In this episode of Industry View, we are delighted to have a rare opportunity to catch up with none other than Pete Ungaro, longtime luminary and admired leader in HPC/AI. Mr. Ungaro is a globally recognized technology executive, named among the “40 under 40” by Corporate Leader Magazine in 2008 and CEO of the Year by Seattle Business Monthly in 2006. He was most recently SVP/GM of High Performance Computing (HPC), Mission Critical Systems (MCS), and HPE Labs at HPE. Previously, he was president and CEO of Cray Inc. until its acquisition by HPE. Prior to joining Cray in 2003, Mr. Ungaro served as Vice President of Worldwide Deep Computing Sales for IBM.
In this episode of Industry View, we cover the Cray journey as it became the clear winner in exascale systems, the HPE acquisition, the challenges of delivering a new extreme-scale system during COVID, a look at HPC software, storage, power and cooling, and quantum computing, the opportunities and challenges of AI, and the geopolitics of high tech.
What’s the latest in quantum computing? Special guest Bob Sorensen of Hyperion Research joins us again to discuss market growth, customer sentiment, recent advances in noise management, applications, and the geopolitics of quantum computing.
In the Messenger Lectures at Cornell in 1964, Richard Feynman said “On the other hand, I think I can safely say that nobody understands quantum mechanics. … Do not keep saying to yourself, if you can possibly avoid it, ‘But how can it be like that?’ because you will get ‘down the drain’, into a blind alley from which nobody has escaped. Nobody knows how it can be like that.”
Why is that? And can the teaching and understanding of Quantum Mechanics be simplified without loss of accuracy or mathematical rigor? For the answer, you have come to the right podcast!
Bob is Chief Scientist at Quantinuum, Distinguished Visiting Research Chair at the Perimeter Institute for Theoretical Physics, and Emeritus Fellow at Wolfson College, Oxford. For the previous two decades, he was Professor of Quantum Foundations, Logics and Structures in the Department of Computer Science at Oxford University, where he co-founded and led a multi-disciplinary Quantum Group that grew to 50 members, and he supervised close to 70 PhD students. He pioneered Categorical Quantum Mechanics, the ZX-calculus, DisCoCat natural language meaning, mathematical foundations for resource theories, Quantum Natural Language Processing, and DisCoCirc natural language meaning. His work has been headlined by various media outlets, including Forbes, New Scientist, PhysicsWorld, and ComputerWeekly. He’s also a musician and painter.
Kerstin Kleese van Dam, Gabriella Carini, and Meifeng Lin of Brookhaven National Laboratory (BNL) join Shahin and Doug to discuss all things Quantum, covering Quantum Sensing, Quantum Networking, and Quantum Computing. We also get a glimpse of BNL and its global leadership across the wide range of research that it conducts.
“We do not expect a ‘transistor moment’ yet, where a particular approach to quantum computing would break away. Superconducting, Trapped ion/atom, Photonics, Electron spin, Topological, etc. and possibly combinations of them will continue to receive significant R&D investment.” – Shahin Khan
Highlights from the recent Hot Chips conference with discussions of UCIe and why it could cause a ripple effect in the industry, Moore’s law and 3D packaging, Silicon Photonics, inference in the device or in the data center, silicon for the edge, CXL, and code generation. This is followed by an update on Quantum Computing following two important papers on quantum machine learning and unstructured NP-complete problems. The field continues to be in its infancy while making rapid and significant progress. We end with a review of the dedication ceremonies for the Frontier exascale system. Join us.
The HPC User Forum held a special event at Oak Ridge National Laboratory last week, complete with an opportunity to get a viewing of the facilities (not quite a tour) and discussions of Exascale Computing and beyond. Doug Black was on the scene and we discuss what all went down. Of special note are the staffing challenges at HPC sites and the brewing strategy for what future leadership computing systems will look like. This is an important topic that we have covered with our guests in previous episodes, and some patterns are emerging as we continue to analyze the future of supercomputing hardware and software.
Major news since our last (double edition) episode included what’s billed as the fastest AI supercomputer by Google, price hikes on chips by TSMC and Samsung, visualization of a black hole in our own galaxy, and IBM’s ambitious and well-executed quantum computing roadmap. We discuss how an AI supercomputer is different, an unexpected impact of chip shortages and price hikes, what it takes to visualize a black hole, and what IBM’s strategy looks like to us from a distance.
Jack Dongarra is a leader in supercomputing technologies, parallel programming tools and technologies, and linear algebra and numerical algorithms. He holds appointments at the University of Tennessee, Oak Ridge National Laboratory, and the University of Manchester, and is the recipient of several awards and honors.
In a wide-ranging discussion, we cover the Turing Award, TOP500, the state of HPC benchmarks, China’s Exascale systems, and future directions in algorithms. We also talk about the future of supercomputing and AI systems, reminisce about a period when a proliferation of system architectures provided fertile ground for experimentation, and discuss whether we are entering a similar era now. This is another episode you’d want to listen to more than once!
A new segment, Top of The News, covers top HPC stories, this time Federal funding for PsiQuantum and GlobalFoundries, AMD’s proposed acquisition of Pensando, and Fujitsu’s cloud offerings. The main topic is storage, which we will cover in multiple episodes going forward, including a very special guest next week. This week we discuss Computational Storage, Erasure Coding, Storage-Class Memory, and Data-Centric AI.
This is part 2 of a special 2-episode discussion of AI in Science with Rick Stevens, Associate Laboratory Director and leader of the Exascale Computing Initiative at Argonne National Laboratory and Professor at the University of Chicago. In addition to the new ways AI can help advance science, we also discuss ethics, bias, robustness, security, and explainability of AI, and whether AI can replace scientists. We end with a snapshot of Quantum Information Science (QIS), a promising area albeit in its earlier stages of development compared to AI.
Web3, IoT/Edge, AI, HPC, Blockchain, Cryptocurrencies, GPUs and Quantum, Cyber Risk, 5G, and BioTech point to opportunities and threats. Why are there so many big technology trends right now? Doug and Shahin discuss a framework to help make sense of why these trends point to important changes, how these trends are related, and what they mean individually and together.
In episode 9 of the @HPCPodcast, we cover the recent wave of news about quantum computing: melding of quantum & classical computing, error correction, financial & investment announcements, M&A and partnerships, and the connection between quantum computing and HPC.
Special guest Bob Sorensen of Hyperion Research joins the crew to share the results of his international study to track down the size of the Quantum Computing market. Bob unveiled these results at the Q2B-21 conference last week.
What Happened at the SC21 Supercomputing Conference? InsideHPC “pulled together a quartet of HPC thought leaders from the technology analyst and national lab communities to gather their reflections”. See the video in this article: Thoughts on SC21: Exascale, TOP500, Diversity in HPC, Quantum, the Metaverse
In this video interview, Shahin Khan talks about the strategic shift among core-cloud-edge and why compute at the edge (HPC at the edge) will gain ascendancy in 2021.
From SiliconANGLE theCUBE: Technology analyst Shahin Khan discusses the intersection of HPC with key industry trends such as 5G, IoT, edge, blockchain, AI, and quantum computing.
Can you cover, in 30 minutes, the technical basics of IoT, 5G, HPC, AI, Blockchain (including cryptocurrencies and smart contracts), and Quantum Computing?
Yes you can! Well, I hope so, anyway, since that is what we did at the HPC-AI Advisory Council Conference – Stanford University edition. These technologies cannot be ignored. They drive the digital infrastructure of the future enterprise. We believe it is a must to know enough about all of them, in one place, so they can inform your strategy, your corporate and product roadmaps, and your narrative.
OrionX works with clients on the impact of Digital Transformation on them, their customers, and their messages. Generally, they want to track, in one place, trends like IoT, 5G, AI, Blockchain, and Quantum Computing. And they want to know what these trends mean, how they affect each other, when they demand action, and how to formulate and execute an effective plan. This talk is a somewhat technical summary of such discussions. Needless to say, if that describes you, please let us know.
Here’s the Slide Deck
Please take a look at the slides below and let us know what you think, or if you’d like us to take you through them.
Shahin is a technology analyst and an active CxO, board member, and advisor. He serves on the board of directors of Wizmo (SaaS) and Massively Parallel Technologies (code modernization) and is an advisor to CollabWorks (future of work). He is co-host of the @HPCpodcast, Mktg_Podcast, and OrionX Download podcast.
The OrionX Research team welcomes special guest Dr. Max Henderson of Rigetti Computing to discuss the intersection of AI and Quantum Computing. Topics of discussion include reformulation, feature extraction, linear solvers, optimization, programming languages and development environments, quantum inspired algorithms, underlying technologies for various quantum computers, and short- and mid-term outlook for AI and quantum computing.
OrionX is a team of industry analysts, marketing executives, and demand generation experts. With a stellar reputation in Silicon Valley, OrionX is known for its trusted counsel, command of market forces, technical depth, and original content.
Digital Transformation (DX) is coming and it will impact you and your customers. It is not just another IT upgrade, but a profound shift in what your clients and customers want, and a shift in what you offer them. It is a shift in your business model.
In a new report, OrionX goes deeper to look at the dimensions of DX and to characterize this trend. OrionX clients automatically receive the full report. Here is an excerpt.
DX Started with Digitization
The Information Revolution is already changing all aspects of society, blending several technology shifts in the process and creating new ones. Digitization is at the root of it, and as it progresses, its transformative powers will change everything.
Whereas the Industrial Age helped humans to surpass their mechanical strengths, the Information Age is helping humans exceed their cognitive strengths. Eventually, all organizations must transform to exploit new opportunities, repel new threats, or simply to operate in a new world.
DX is a Board Room Imperative
DX is a threat unless you turn it into an opportunity. It is the new world order, and a necessary topic for executive level attention, sponsorship, and decision.
DX is Multi-Disciplinary
What makes DX especially challenging is that it must blend several technology shifts at once; not all of them, but the right ones. Technology disruptions caused by IoT, 5G, Cloud, AI, HPC, Blockchain, Cryptocurrencies, Smart Contracts, Cybersecurity, Quantum Computing, Robotics, 3D printing, Virtual and Augmented Reality, and BioTech are all eligible to impact you and your customers.
[…]
The emerging Information Age and its global impact is the umbrella trend that drives the OrionX Research, Market Execution, and Customer Engagement work. Digital Transformation is the key market trend and vehicle towards it.
The OrionX Research team is back with Stephen, Shahin, and Dan going over the state of the market in Cryptocurrency, Blockchain, and Quantum Computing.
There are more Things than anything else! And they’re going online in droves, adding capability but also vulnerability, and making the Secure Internet of Things (IoT) the superset of the technology trends. Secure IoT is the place with the most interdisciplinary, and therefore the most difficult, challenges. If you believe, as you should, that no-compromise cybersecurity is table stakes, then IoT means “secure IoT” and IoT by itself becomes the ultimate hashtag soup. Here is why.
IoT and Security are End to End
IoT and security require an end-to-end mindset. Today’s solutions are fragmented. Integrating them is hard and requires blending many technologies and algorithms. It requires a team with a very broad set of expertise whose members understand each other and employ each other’s strengths.
#IoT
Things come in many shapes and sizes. We have to make a distinction between Small-t things (sensors and micro devices), Big-T Things (which often form meta things, or Systems), and All-caps THINGS (meta systems and the so-called Distributed Autonomous Organizations, or DAOs). Managing them in a coherent manner requires an expertise rubber band that has to stretch from the microscopic to the astronomic:
Size: Tiny Things to large Things
Data: Simple signals to structured data to context-sensitive streams
Mobility: stationary Things to Things that are carried by people to self-propelled Things
Autonomy: Things that are controlled by people to the so-called DAOs
#Cybersecurity
Security is not just a technology and practice but a non-negotiable requirement that cuts across all aspects of tech. Small-t things require a whole new approach to device and data security all the way up the supply chain, a difficult technological and operational task and an area of intense innovation, for example by blending AI, IoT, and Cybersecurity disciplines or pursuing entirely new approaches. Big-T Things remain hard to secure but have more built-in resources and processing capability, making them eligible to be handled like existing IT systems.
Physical #Security
Things, scattered around, are exposed to physical tampering so it is important to physically secure them and detect any unauthorized physical access or near-field or far-field monitoring. As is often said, “security is security” so physical security and cybersecurity must be viewed and implemented as one thing: all under the umbrella of risk and threat management and #sustainability.
#Mobility
Many Things move, either by themselves (like #drones or robots) or carried by other things (pick a form of transportation) or by people (personal device or #wearables). Mobility issues from initial on-boarding to authenticated secure transmission to data streaming and data semantics to over-the-air (OTA) updates all become critical in IoT.
#Blockchain
Distributed Things that generate lots of data, require secure authenticated communications, are possibly subject to compliance regulations, may need digital rights management (#DRM), or need verifiable delivery… are the kind of use case that Blockchain technology can address very well. IoT applications are often natural candidates. (see also the OrionX Download podcast series on Blockchain.)
#BigData
Connected devices generate data. Making sense of that data requires Big Data practices and technologies: data lakes, data science, storage, search, apps, etc.
#AI
When you have so much data, invariably you will find an opportunity for AI (see the OrionX/ai page for a repository of reports), either via #MachineLearning, which uses the data, or #ExpertSystems for policy management, which decide what to do about the insight that the data generates. AI frameworks, especially the #DeepLearning variety, are #HPC oriented and require knowledge of HPC systems and algorithms.
There will come a time when another emerging trend, #QuantumComputing, will be useful for AI and cybersecurity.
#Robotics
If Things are smart enough and can move, they become robots. Distributed Autonomous Objects (DAOs) use and generate data and require a host of security, processing, control, and policy capabilities.
#Cloud
All of this processing happens in the cloud somewhere, so all cloud technologies come into play. The required expertise will cover the gamut: app development, streaming data, dev-ops, elasticity, service level assurance, API management, microservices, data storage, multi-cloud reliability, etc.
Legal Framework
IoT provides new kinds of information extraction, manipulation, and control. Just the AI and Robotics components are enough to pose a challenge for existing legal systems. Ultimately, it requires an ethics framework that can be used to create a proper legal system, along with widespread communication so that organizational structures and culture can keep up.
Summary
Not every IoT deployment will need all of the above, but as you try to stitch together a strategy for your IoT project, it is imperative to do so with a big picture in mind. IoT is the essence of #DigitalTransformation and touches all aspects of your organization. It’s quite a challenge to make it seamless, and you’ll need specialists, generalists, and not just analysts but also “synthesists”.
In the OrionX 2016 Technology Issues and Predictions blog, we said “If you missed the boat on cloud, you can’t miss it on IoT too”, and “IoT is where Big Data Analytics, Cognitive Computing, and Machine Learning come together for entirely new ways of managing business processes.” Later, in February of 2016, my colleague Stephen Perrenod’s blog IoT: The Ultimate Convergence, further set the scene. Today, we see those predictions have become reality while IoT is poised for even more convergence.
We will cover these topics in future episodes of The OrionX Download podcast, identifying the salient points. As usual, we will go one or two levels below the surface to understand not just what these technologies do, but how they do it and how they came about. That will in turn, help put new developments in perspective and explain why they may or may not address a pain point.
Here at OrionX.net, our research agenda is driven by the latest developments in technology. We are also fortunate to work with many tech leaders in nearly every part of the “stack”, from chips to apps, who are driving the development of such technologies.
In recent years, we have had projects in Cryptocurrencies, IoT, AI, Cybersecurity, HPC, Cloud, Data Center, … and often a combination of them. Doing this for several years has given us a unique perspective. Our clients see value in our work since it helps them connect the dots better, impacts their investment decisions and the options they consider, and assists them in communicating their vision and strategies more effectively.
We wanted to bring that unique perspective and insight directly to you. “Simplifying the big ideas in technology” is how we’d like to think of it.
The OrionX Download is both a video slidecast (visuals but no talking heads) and an audio podcast in case you’d like to listen to it.
Every two weeks, co-hosts Dan Olds and Shahin Khan, and other OrionX analysts, discuss some of the latest and most important advances in technology. If our research has an especially interesting finding, we’ll invite guests to probe the subject and add their take.
Please give it a try and let us know what we can do better or if you have a specific question or topic that you’d like us to cover.
Today, we are announcing the OrionX Constellation™ research framework as part of our strategy services.
Paraphrasing what I wrote in my last blog when we launched 8 packaged solutions:
One way to see OrionX is as a partner that can help you with content, context, and dissemination. OrionX Strategy can create the right content because it conducts deep analysis and provides a solid perspective in growth segments like Cloud, HPC, IoT, Artificial Intelligence, etc. OrionX Marketing can put that content in the right context by aligning it with your business objectives and customer needs. And OrionX PR can build on that with new content and packaging to pursue the right dissemination, whether by doing the PR work itself or by working with your existing PR resources.
Even if you only use one of the services, the results are more effective because strategy is aware of market positioning and communication requirements; marketing benefits from deep industry knowledge and technical depth; and PR can set a reliable trajectory armed with a coherent content strategy.
Constellation™: Research from OrionX
The Constellation research is all about high quality content. That requires the ability to go deep on technologies, market trends, and customer sentiment. Over the past two years, we have built such a capability.
Given the stellar connotations of the name Orion (not to mention our partner Stephen Perrenod’s book and blog about quantum physics and astrophysics), it is natural that we would see the players in each market segment as stars in a constellation. Constellations interact and evolve, and stars are of varying size and brightness, their respective positions dependent on cosmic (market) forces and your (customers’) point of view. Capturing that reality in a relatively simple framework has been an exciting achievement that we are proud to unveil.
OrionX Constellation Research Model
Types of Data
The OrionX strategy process collects and organizes data in three categories: market, customer, and product. The Constellation process further divides that into two parameters in each category:
Market presence and trends: a vendor’s presence in the segment and its ability to shape or embrace market trends
Customer needs and readiness: a typical customer’s current needs and readiness to adopt a particular product
Product capabilities and roadmap: a product’s existing capabilities and competitive standing as well as its expected enhancements and replacements in the future
This data is then analyzed to identify salient points, which are in turn synthesized into coherent insights: the basis for a clear understanding of the dynamics at play, for effective market conversations, and for change management.
Types of Report
The resulting content leads to five types of reports that capture the OrionX perspective on a segment. We call these the five Es:
Event: industry milestones and periodic or seasonal topics
Evolution: historical perspective on a technology or market as a way of defining the segment and understanding the parameters that impact customer sentiment
Environment: the players (stars) in the segment that is in focus
Evaluation: how customer needs have evolved and how they evaluate (or should evaluate) the products in a segment
Excellence: how OrionX analysts see the segment and score each vendor according to six parameters in the three categories of data, two parameters in each category.
OrionX Constellation Decision Tool
Visualizing six distinct parameters, each with its own axis, in three categories cannot be done with the traditional 2×2 or 3×3 diagrams. Indeed, existing models in the industry do quite a good job of providing simple diagrams that rank vendors in a segment.
To simplify this visualization, we have developed the OrionX Constellation Decision Tool. This diagram effectively visualizes the six parameters that capture the core of technology customers’ selection process. (A future blog will drill down into the diagram.)
Sample Reports
While OrionX is not a typical industry analyst firm, we will offer samples of our reports from time to time. We hope they are useful to you and provide a glimpse of our work. They will complement our blog.
The world of quantum computing frequently seems as bizarre as the alternate realities created in Lewis Carroll’s masterpieces “Alice’s Adventures in Wonderland” and “Through the Looking-Glass”. Carroll (Charles Lutwidge Dodgson) was a well-respected mathematician and logician in addition to being a photographer and enigmatic author.
Has quantum computing’s time actually come or are we just chasing rabbits?
That is probably a twenty million dollar question by the time a D-Wave 2X™ System has been installed and is in use by a team of researchers. Publicly disclosed installations currently include Lockheed Martin, NASA’s Ames Research Center and Los Alamos National Laboratory.
Hosted at NASA’s Ames Research Center in California, the Quantum Artificial Intelligence Laboratory (QuAIL) supports a collaborative effort among NASA, Google and the Universities Space Research Association (USRA) to explore the potential for quantum computers to tackle optimization problems that are difficult or impossible for traditional supercomputers to handle. Researchers on NASA’s QuAIL team are using the system to investigate areas where quantum algorithms might someday dramatically improve the agency’s ability to solve difficult optimization problems in aeronautics, Earth and space sciences, and space exploration. For Google the goal is to study how quantum computing might advance machine learning. The USRA manages access for researchers from around the world to share time on the system.
Using quantum annealing to solve optimization problems
D-Wave’s quantum annealing technology addresses optimization and probabilistic sampling problems by framing them as energy minimization problems and exploiting the properties of quantum physics to identify the most likely outcomes, either as individual low-energy solutions or as a probabilistic map of the solution landscape.
Quantum annealer dynamics are dominated by paths through the mean field energy landscape that have the highest transition probabilities. Figure 1 shows a path that connects local minimum A to local minimum D.
Figure 2 shows the effect of quantum tunneling (in blue) to reduce the thermal activation energy needed to overcome the barriers between the local minima with the greatest advantage observed from A to B and B to C, and a negligible gain from C to D. The principle and benefits are explained in detail in the paper “What is the Computational Value of Finite Range Tunneling?”
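For readers who want the formulation behind this picture, here is a minimal sketch of the kind of energy function a quantum annealer minimizes (the standard Ising form in textbook notation; this sketch is ours, not taken from the paper above):

$$ E(\mathbf{s}) \;=\; \sum_i h_i\, s_i \;+\; \sum_{i<j} J_{ij}\, s_i s_j, \qquad s_i \in \{-1, +1\}, $$

where the programmable per-qubit biases $h_i$ and couplings $J_{ij}$ encode the optimization or sampling problem, and low-energy configurations $\mathbf{s}$ correspond to the most likely outcomes the hardware returns. Figures 1 and 2 are then pictures of this $E(\mathbf{s})$ landscape and of how tunneling shortens the route between its local minima.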
The D-Wave 2X: Interstellar Overdrive – How cool is that?
As a research area, quantum computing is highly competitive, but if you want to buy a quantum computer then D-Wave Systems, founded in 1999, is the only game in town. Quantum computing is as promising as it is unproven. It goes beyond Moore’s law, since every quantum bit (qubit) doubles the computational power, similar to the famous wheat and chessboard problem. So the payoff is huge, even though it is expensive, unproven, and difficult to program.
The advantage of quantum annealing machines is that they are much simpler to build than gate-model quantum computers. The latest D-Wave machine (the D-Wave 2X), installed at NASA Ames, is approximately twice as powerful (in a quantum, exponential sense) as the previous model, with over 1,000 qubits (1,097). This compares with roughly 10 qubits for current gate-model quantum systems, a difference of two orders of magnitude. It is a question of scale, no simple task, and a unique achievement. Although quantum researchers initially questioned whether the D-Wave system, which implements a subset of quantum computing architectures, even qualified as a quantum computer, that argument seems mostly settled, and it is now generally accepted that quantum characteristics have been adequately demonstrated.
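To put the chessboard analogy in numbers (a back-of-the-envelope note on state-space size, not a claim about delivered performance; the arithmetic is ours, not from the paper above):

$$ 2^{64} - 1 \approx 1.8 \times 10^{19} \ \text{(the chessboard total)}, \qquad 2^{10} = 1024, \qquad 2^{1097} \approx 10^{330}. $$

The joint state space of $n$ qubits spans $2^n$ basis states, so each added qubit doubles that space; in that sense, 1,097 qubits versus roughly 10 is far more than a hundred-fold difference.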
In a D-Wave system, a coupled pair of qubits (quantum bits) demonstrates quantum entanglement (they influence each other), so that the entangled pair can be in any one of four states, depending on how the coupling and energy biases are programmed. By representing the problem to be addressed as an energy map, the most likely outcomes can be derived by identifying the lowest energy states.
A lattice of approximately 1,000 tiny superconducting circuits (the qubits) is chilled close to absolute zero to deliver quantum effects. A user models a problem as a search for the lowest point in a vast energy landscape. The processor considers all possibilities simultaneously to determine the lowest energy required to form those relationships. Multiple solutions are returned to the user, scaled to show optimal answers, in an execution time of around 20 microseconds, which is practically instantaneous for all intents and purposes.
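To make the “energy map” concrete, here is a minimal sketch in plain Python (not D-Wave’s actual API; the biases, couplings, and the toy problem of max-cut on a triangle are illustrative assumptions). It enumerates every configuration of a three-qubit Ising problem and picks out the lowest-energy states, which is what the annealer is asked to sample, only at far larger scale:

```python
from itertools import product

# Minimal Ising-model sketch (illustrative, not D-Wave's API):
#   E(s) = sum_i h_i * s_i + sum_{i<j} J_ij * s_i * s_j, with s_i in {-1, +1}
h = [0.0, 0.0, 0.0]                          # per-qubit energy biases
J = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0}  # couplings: max-cut on a triangle

def energy(spins):
    """Ising energy of one spin configuration."""
    bias_term = sum(hi * si for hi, si in zip(h, spins))
    coupling_term = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return bias_term + coupling_term

# Enumerate all 2^n configurations; a quantum annealer samples low-energy ones instead.
landscape = {s: energy(s) for s in product((-1, +1), repeat=len(h))}
lowest = min(landscape.values())
ground_states = [s for s, e in landscape.items() if e == lowest]

print("lowest energy:", lowest)        # -1.0 for this toy problem
print("ground states:", ground_states) # the six configurations that cut two edges
```

Brute force works here only because there are 2^3 = 8 configurations; the number of configurations grows as 2^n, which is why enumeration stops being an option almost immediately and why sampling low-energy states directly in hardware is attractive.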
The D-Wave system cabinet – “The Fridge” – is a closed-cycle dilution refrigerator. The superconducting processor itself generates no heat, but to operate reliably it must be cooled to about 180 times colder than interstellar space, approximately 0.015 kelvin.
Environmental considerations: Green is the color
To function reliably, quantum computing systems require environments that are not only shielded from the Earth’s natural environment but would be considered inhospitable to any known life form: a high vacuum, at a pressure 10 billion times lower than atmospheric pressure, and magnetic shielding to 50,000 times less than Earth’s magnetic field. Not exactly a normal office, datacenter, or HPC facility environment.
On the other hand, the self-contained “Fridge” and servers consume just 25kW of power (approximately the power draw of a single heavily populated standard rack) and about three orders of magnitude (1000 times) less power than the current number one system on the TOP500, including its cooling system. Perhaps a more significant consideration is that power demand is not anticipated to increase significantly as it scales to several thousands of qubits and beyond.
In addition to doubling the number of qubits compared with the prior D-Wave system, the D-Wave 2X delivers lower noise in qubits and couplers, giving greater confidence in the achieved results.
So much for the pictures, what about the conversations?
Now that we have largely moved beyond the debate of whether a D-Wave system is actually a quantum machine or not, then the question “OK, so what now?” could bring us back to chasing rabbits, although this time inspired by the classic Jefferson Airplane song, “White Rabbit”:
“One algorithm makes you larger
And another makes you small
But do the ones a D-Wave processes
Do anything at all?”
That, of course, is where the conversations begin. It may depend upon the definition of “useful” and also on a comparison between “conventional” systems and quantum computing approaches. Even the fastest supercomputer we can build using the most advanced traditional technologies can still only perform analysis by examining each possible solution serially, one solution at a time. This makes optimizing complex problems with a large number of variables and large data sets a very time-consuming business. By comparison, once a problem has been suitably constructed for a quantum computer, it can explore all the possible solutions at once and instantly identify the most likely outcomes.
If we consider relative performance then we begin to have a simplistic basis for comparison, at least for execution times. The QuAIL system was benchmarked for the time required to find the optimal solution with 99% probability for different problem sizes up to 945 variables. Simulated Annealing (SA), Quantum Monte Carlo (QMC) and the D-Wave 2X were compared. Full details are available in the paper referenced previously. Shown in the chart are the 50th, 75th and 85th percentiles over a set of 100 instances. The error bars represent 95% confidence intervals from bootstrapping.
This experiment occupied millions of processor cores for several days to tune and run the classical algorithms for these benchmarks. The runtimes for the higher quantiles for the larger problem sizes for QMC were not computed because the computational cost was too high.
The results demonstrate a performance advantage for the quantum annealing approach by a factor of 100 million compared with simulated annealing running on a single state-of-the-art processor core. By comparison, the current leading system on the TOP500 has fewer than 6 million cores of any kind, so even perfect parallel scaling across the entire machine would not close a 100-million-fold gap, implying a clear performance advantage for quantum annealing based on execution time.
The challenge and the next step is to explore the mapping of real world problems to quantum machines and to improve the programming environments, which will no doubt take a significant amount of work and many conversations. New players will become more visible, early use cases and gaps will become better defined, new use cases will be identified, and a short stack will emerge to ease programming. This is reminiscent of the early days of computing or space flight.
A quantum of solace for the TOP500: Size still matters.
Even though we don’t expect to see viable exascale systems this decade, and quite likely not before the middle of the next, we won’t be seeing a Quantum500 anytime soon either. NASA talks about putting humans on Mars sometime in the 2030s, and it isn’t unrealistic to think of practical quantum computing as being on a similar trajectory. Recent research at the University of New South Wales (UNSW) in Sydney, Australia demonstrated that it may be possible to create quantum computer chips that could store thousands, even millions, of qubits on a single silicon processor chip leveraging conventional computer technology.
Although the current D-Wave 2X system is a singular achievement, it is still regarded as too small to handle real-world problems and would benefit from getting beyond pairwise connectivity, but that isn’t really the point. It plays a significant role in research into areas such as vision systems, artificial intelligence, and machine learning alongside its optimization capabilities.
In the near term, we’ve got enough information and evidence to get the picture. It will be the conversations that become paramount with both conventional and quantum computing systems working together to develop better algorithms and expand the boundaries of knowledge and achievement.
Peter is an expert in Cloud Computing, Big Data, and HPC markets, technology trends, and customer requirements which he blends to craft growth strategies and assess opportunities.
Here at OrionX.net, we are fortunate to work with tech leaders across several industries and geographies, serving markets in Mobile, Social, Cloud, and Big Data (including Analytics, Cognitive Computing, IoT, Machine Learning, Semantic Web, etc.), and focused on pretty much every part of the “stack”, from chips to apps and everything in between. Doing this for several years has given us a privileged perspective. We spent some time discussing what we are seeing and capturing some of the trends in this blog: our 2016 technology issues and predictions. We cut it at 17, but we hope it’s a quick read that you find worthwhile. Let us know if you’d like to discuss any of the items or the companies that are driving them.
1- Energy technology, risk management, and climate change refashion the world
Energy is arguably the most important industry on the planet. Advances in energy efficiency and sustainable energy sources, combined with the debate and observations of climate change, and new ways of managing capacity risk are coming together to have a profound impact on the social and political structure of the world, as indicated by the Paris Agreement and the recent collapse in energy prices. These trends will deepen into 2016.
2- Cryptocurrencies drive modernization of money (the original virtualization technology)
Money was the original virtualization technology! It decoupled value from goods, simplified commerce, and enabled the service economy. Free from the limitations of physical money, cryptocurrencies can take a fresh approach to simplifying how value (and ultimately trust, in a financial sense) is represented, modified, transferred, and guaranteed in a self-regulated manner. While none of the existing implementations accomplish that, they are getting better understood and the ecosystem built around them will point the way toward a true digital currency.
3- Autonomous tech remains a fantasy, technical complexity is in fleet networks, and all are subordinate to the legal framework
Whether flying, driving, walking, sailing, or swimming, drones and robots of all kinds are increasingly common. Full autonomy will remain a fantasy except for very well defined and constrained use cases. Commercial success favors technologies that aim to augment a human operator. The technology complexity is not in getting one of them to do an acceptable job, but in managing fleets of them as a single network. But everything will be subordinate to an evolving and complex legal framework.
4- Quantum computing moves beyond “is it QC?” to “What can it do?”
A whole new approach to computing (as in, not binary any more), quantum computing is as promising as it is unproven. Quantum computing goes beyond Moore’s law since every quantum bit (qubit) doubles the computational power, similar to the famous wheat and chessboard problem. So the payoff is huge, even though it is, for now, expensive, unproven, and difficult to use. But new players will become more visible, early use cases and gaps will become better defined, new use cases will be identified, and a short stack will emerge to ease programming. This is reminiscent of the early days of computing so a visit to the Computer History Museum would be a good recalibrating experience.
5- The “gig economy” continues to grow as work and labor are better matched
The changing nature of work and traditional jobs received substantial coverage in 2015. The prospect of artificial intelligence that could actually work is causing fears of wholesale elimination of jobs and management layers. On the other hand, employers routinely have difficulty finding talent, and employees routinely have difficulty staying engaged. There is a structural problem here. The “sharing economy” is one approach, albeit legally challenged in the short term. But the freelance and outsourcing approach is alive and well and thriving. In this model, everything is either an activity-sliced project, or a time-sliced process, to be performed by the most suitable internal or external resources. Already, in Silicon Valley, it is common to see people carrying 2-3 business cards as they match their skills and passions to their work and livelihood in a more flexible way than the elusive “permanent” full-time job.
6- Design thinking becomes the new driver of customer-centric business transformation
With the tectonic shifts in technology, demographics, and globalization, companies must transform or else. Design thinking is a good way to bring customer-centricity further into a company and ignite employees’ creativity, going beyond traditional “data driven needs analysis.” What is different this time is the intimate integration of arts and sciences. What remains the same is the sheer difficulty of translating complex user needs into products that are simple but not simplistic, and beautiful yet functional.
7- IoT: if you missed the boat on cloud, you can’t miss it on IoT too
Old guard IT vendors will have the upper hand over new Cloud leaders as they all rush to claim IoT leadership. IoT is where Big Data Analytics, Cognitive Computing, and Machine Learning come together for entirely new ways of managing business processes. In its current (emerging) condition, IoT requires a lot more vertical specialization, professional services, and “solution-selling” than cloud computing did when it was in its relative infancy. This gives traditional big (and even small) IT vendors a chance to drive and define the terms of competition, possibly controlling the choice of cloud/software-stack.
8- Security: Cloud-native, Micro-zones, and brand new strategies
Cybercrime is big business and any organization with digital assets is vulnerable to attack. As Cloud and Mobile weaken IT’s control and IoT adds many more points of vulnerability, new strategies are needed. Cloud-native security technologies will include those that redirect traffic through SaaS-based filters, Micro-Zones to permeate security throughout an app, and brand new approaches to data security.
9- Cloud computing drives further consolidation in hardware
In any value chain, a vendor must decide what value it offers and to whom. With cloud computing, the IT value chain has been disrupted. What used to be a system is now a piece of some cloud somewhere. As the real growth moves to “as-a-service” procurements, there will be fewer but bigger buyers of raw technology who drive hardware design towards scale and commoditization.
10- Composable infrastructure matures, leading to “Data Center as a System”
The computing industry was down the path of hardware partitioning when virtualization took over, and dynamic reconfiguration of hardware resources took a backseat to manipulating software containers. Infrastructure-as-code, composable infrastructure, converged infrastructure, and rack-optimized designs expand that concept. But container reconfiguration is insufficient at scale, and what is needed is hardware reconfiguration across the data center. That is the next frontier and the technologies to enable it are coming.
11- Mobile devices move towards OS-as-a-Service
Mobile devices are now sufficiently capable that new features may or may not be needed by all users and new OS revs often slow down the device. Even with free upgrades and pre-downloaded OS revs, it is hard to make customers upgrade, while power users jailbreak and get the new features on an old OS. Over time, new capabilities will be provided via more modular dynamically loaded OS services, essentially a new class of apps that are deeply integrated into the OS, to be downloaded on demand.
12- Social Media drives the Analytics Frontier
Nowhere are the demands for scale, flexibility and effectiveness for analytics greater than in social media. This is far beyond Web Analytics. The seven largest “populations” in the world are Google, China, India, Facebook, WhatsApp, WeChat and Baidu, in approximately that order, not to mention Amazon, Apple, Samsung, and several others, plus many important commercial and government applications that rely on social media datasets. Searching through such large datasets with very large numbers of images, social commentary, and complex network relationships stresses the analytical algorithms far beyond anything ever seen before. The pace of algorithmic development for data analytics and for machine intelligence will accelerate, increasingly shaped by social media requirements.
13- Technical Debt continues to accumulate, raising the cost of eventual modernization
Legacy modernization will get more attention as micro-services, data-flow, and scale-out elasticity become established. But long-term, software engineering is in dire need of the predictability and maintainability that is associated with other engineering disciplines. That need is not going away and may very well require a wholesale new methodology for programming. In the meantime, technologies that help automate software modernization, or enable modular maintainability, will gain traction.
14- Tools emerge to relieve the DB-DevOps squeeze
The technical and operational burden on developers has been growing. It is not sustainable. NoSQL databases removed the time-delay and complexity of a data schema at the expense of more complex codes, pulling developers closer to data management and persistence issues. DevOps, on the other hand, has pulled developers closer to the actual deployment and operation of apps with the associated networking, resource allocation, and quality-of-service (QoS) issues. This is another “rubber band” that cannot stretch much more. As cloud adoption continues, development, deployment, and operations will become more synchronized enabling more automation.
15- Memory-only architectures finally become practical
The idea of a “memory-only architecture” dates back several decades. New super-large memory systems are finally making it possible to hold entire data sets in memory. Combine this with Flash (and other emerging storage-class memory technologies) and you have the recipe for entirely new ways of achieving near-real-time/straight-through processing.
16- Multi-cloud will be used as a single cloud
Small and mid-size public cloud providers will form coalitions around a large market leader to offer enterprise customers the flexibility of cloud without the lock-in and the risk of having a single supplier for a given app. This opens the door for transparently running a single app across multiple public clouds at the same time.
17- Binary compatibility cracks
It’s been years since most app developers needed to know what CPU their app runs on, since they work on the higher levels of a tall software stack. Porting code still requires time and effort, but for elastic/stateless cloud apps, the work is to make sure the software stack is there and works as expected. But the emergence of large cloud providers is changing the dynamics. They have the wherewithal to port any system software to any CPU, thus assuring a rich software infrastructure. And they need to differentiate and cut costs. We are already seeing GPUs in cloud offerings and FPGAs embedded in CPUs. We will also see the first examples of special cloud regions based on one or more of ARM, OpenPower, MIPS, and SPARC. Large providers can now offer a usable cloud infrastructure using any hardware that is differentiated and economically viable, free from the requirement of binary compatibility.
Long-term planning is an art. But it might take a real scientist to tackle a hundred-year planning horizon!
Frank Wilczek is a Nobel Prize winner in theoretical physics, awarded for his work in quantum chromodynamics (quarks, to you and me). He is currently the Herman Feshbach Professor of Physics at M.I.T.
Professor Wilczek was invited to speak at Brown University’s 250th anniversary last year, and was asked to make predictions about the future of physics and technology 250 years from now. Considering that much too difficult an assignment, he modified it (renormalized it, in physics terms) to looking forward into the next century.
We won’t look here at his predictions for advancement in physics, many of which have to do with further unification of the laws of physics, such as supersymmetry, but instead focus on long-term planning for technology and his predictions in that area. His paper “Physics in 100 Years” is available here: http://arxiv.org/pdf/1503.07735.pdf and includes both the physics and technology predictions and other speculations about the future of humanity.
Here are some of his technology predictions (quotes from the paper are shown in italics) along with our elaboration, interpretation and reflections.
Microscale
Calculation will increasingly replace experimentation in design of useful materials, catalysts, and drugs, leading to much greater efficiency and new opportunities for creativity.
Computation is key to the nanoscale revolution for developing super-strong yet highly flexible and lightweight materials that can be 3-D printed for a wide variety of applications. And rapid computation is key to developing new drugs specific to individuals’ genetic makeup (targeted gene therapies).
Calculation of many nuclear properties from fundamentals will reach < 1% accuracy, allowing much more accurate modeling of supernovae and of neutron stars. Physicists will learn to manipulate atomic nuclei dexterously, as they now manipulate atoms, enabling (for example) ultra-dense energy storage and ultra-high energy lasers.
Currently, all we are really able to do at the nuclear level is build fission reactors and fission or fusion bombs. As one example, hydrogen fusion will become economically viable during this century, liberating tremendous amounts of energy from deuterium and tritium (heavy hydrogen nuclei) extracted from water, but with lower associated risks as compared to nuclear reactors using uranium.
Mesoscale
Capable three-dimensional, fault-tolerant, self-repairing computers will be developed. In engineering those features, we will learn lessons relevant to neurobiology.
The human brain is a 3-D construct with an extremely complex network and a high degree of parallelism, which is key to its processing speed and thus its intelligence. Computers as systems are today packaged to some extent in 3-D, but are intrinsically based around 2-dimensional CPU chips and 2-D memory chips. These chips are connected one to the other with simple networks. In the future, CPUs, together with their associated memory, will be designed with 3-D architectures, allowing for much faster speeds, much higher connectivity, and very much greater memory bandwidth. Quantum computing technology based on more robust qubits, rather than bits, will be well established, allowing for tremendous speedups for certain classes of algorithms. The image below is of the CPU chip from the first line of commercially available quantum computers. One of these D-Wave systems is operated jointly by Google and the NASA Advanced Supercomputing Facility in Mountain View, California.
Self-assembling, self-reproducing, and autonomously creative machines will be developed. Their design will adapt both ideas and physical modules from the biological world.
We’re talking intelligent robots, here, folks! And other autonomous, intelligent, and yes, self-reproducing machines, some very tiny (able to enter the bloodstream for medical purposes), others very large (see macroscale). Artificial organs. Asteroid mining machines. Robot armies (war without human casualties?). The possibilities are endless. Asimov’s 3 laws of robotics will be enforced.
Macroscale
Bootstrap engineering projects wherein machines, starting from crude raw materials, and with minimal human supervision, build other sophisticated machines – notably including titanic computers – will be underway.
Future supercomputers will self-assemble, with limited human oversight in the assembly process. Programming will have much higher levels of machine assistance, with problem statements represented at a very high level. Machinery and vehicle production will be almost entirely automated.
A substantial fraction of the Sun’s energy impinging on Earth will be captured for human use.
The deserts will bloom with advanced solar cells and superconducting transmission lines. Humankind will have settled and begun working in the inner solar system, including the Moon, Mars, one or more major asteroids, and one or more of Jupiter’s moons (e.g. Europa, Callisto).
Imagine how much has changed in the past 100 years. No one had flown on a commercial aircraft. Very few people had telephones or automobiles or radios. And consider that the pace of discovery, knowledge acquisition, and technological development is today much higher, and still accelerating. A hundred years from now, average human lifetimes should exceed a century, based on revolutionary medical advances and ever safer transportation.
We’d be interested to hear from you, what are your thoughts on the state of technology a century from now? Please comment.
Stephen Perrenod has lived and worked in Asia, the US, and Europe and possesses business experience across all major geographies in the Asia-Pacific region. He specializes in corporate strategy for market expansion, and cryptocurrency/blockchain on a deep foundation of high performance computing (HPC), cloud computing and big data. He is a prolific blogger and author of a book on cosmology.