- Resource Estimation Challenge at QRISE 2024: Recap, by Mariia Mykhailova, Principal Software Engineer, on August 1, 2024 at 3:18 pm
This spring, we partnered with Quantum Coalition to offer a challenge at QRISE 2024. This six-week event aimed to connect students with quantum computing industry research challenges and help them get started on research projects of their own.

The challenge we offered to the participants focused on resource estimation of quantum algorithms. Resource estimation helps us answer the question "How many physical qubits and how much time are necessary to execute a quantum algorithm under specific assumptions about the hardware platform used?" Getting these kinds of estimates serves multiple purposes:

- It allows us to deduce the conditions that quantum hardware needs to meet to offer practical quantum advantage.
- It helps us clarify which algorithms truly give quantum advantage over their classical counterparts, which ones do not, and, when they do, which problem instances get the advantage.
- It allows us to compare the efficiency of different algorithms that solve the same problem long before they become viable to run on quantum machines, thus enabling work on improving quantum algorithms.

The goal of the challenge was to implement a quantum algorithm of the participants' choice and to obtain and analyze estimates of the resources required to run it on future fault-tolerant quantum computers using the Microsoft Azure Quantum Resource Estimator. These are exactly the kinds of questions quantum algorithms researchers work on!

Let's meet the winning teams and learn about their projects in their own words!

Team Qu-Cats: Katie Harrison, Muhammad Waqar Amin, Nikhil Londhe, Sarah Dweik

The Quantum Approximate Optimization Algorithm (QAOA) is a quantum algorithm used to solve optimization problems. However, QAOA can only solve an optimization problem that can be formulated as a quadratic unconstrained binary optimization (QUBO) problem. In this project, we chose to solve the Number Partitioning Problem (NPP) using QAOA. NPP asks whether a given set of numbers can be split into two distinct partitions such that the difference between the total sums of the numbers in each partition is minimal. This problem has applications in various fields, including cryptography, task scheduling, and VLSI design. It is also recognized for its computational difficulty, often described as the "easiest hard problem". In this project, we accomplished two primary objectives: first, we determined the optimal QPU configuration for running QAOA; then, we analyzed how the resource estimates change as the input size grows.

To determine the best setup for the quantum processing unit (QPU), we evaluated resources for eight different hardware setups, tracking variables such as the number of physical qubits, the fraction of qubits used by T factories, and the runtime, among others. The table below details the results for the eight configurations.

In addition, we analyzed resource estimates across a range of input sizes. The plot below shows a segment of the analysis, primarily illustrating how the number of physical qubits varies with increasing input size.

Besides that, we plotted other variables, such as algorithm qubits, partitions (in NPP), and T-factory qubits. All of these variables increase as the input size increases. This is expected, because the QUBO cost function requires one binary variable for every element in the set. We also plotted the number of partitions, which represents the scale of the problem for a particular input size.
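To connect the QUBO formulation above to an actual circuit, here is a minimal, hypothetical Q# sketch of one QAOA layer for NPP in its Ising form, C = (sum_i s_i Z_i)^2. This is not the team's submission; it assumes the intrinsic Rzz two-qubit rotation and IntAsDouble from the modern QDK, and the constant factors of the cost function are folded into the angle gamma.

```qsharp
namespace QaoaNppSketch {
    open Microsoft.Quantum.Convert;

    // One QAOA layer for number partitioning with weights s:
    // pairwise ZZ rotations (phase separation), with angle proportional
    // to gamma * s[i] * s[j], followed by the standard X mixer.
    operation ApplyQaoaLayer(s : Int[], gamma : Double, beta : Double, qs : Qubit[]) : Unit {
        let n = Length(qs);
        for i in 0..n - 2 {
            for j in i + 1..n - 1 {
                Rzz(2.0 * gamma * IntAsDouble(s[i] * s[j]), qs[i], qs[j]);
            }
        }
        for q in qs {
            Rx(2.0 * beta, q);
        }
    }
}
```

A full QAOA run would repeat such a layer several times with classically optimized angles and measure the register in the computational basis; an entry point of that shape is what the Resource Estimator is then pointed at.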
Interestingly, we notice that up to 12 elements, the number of partitions is higher than the number of physical qubits. This indicates that QAOA is at a severe disadvantage compared to the brute-force approach. However, as the number of elements increases beyond 12, the growth in the number of physical qubits slows down.

Check out the team's project!

Team Exponential: Niraj Venkat

Integer factorization is a well-studied problem in computer science and the core hardness assumption behind the widely used RSA cryptosystem. It is part of a larger framework called the hidden subgroup problem, which also includes the discrete logarithm, graph isomorphism, and shortest vector problems. State-of-the-art classical algorithms, such as the number field sieve, can perform factorization in subexponential time. Shor's algorithm is a famous result that kicked off the search for practical quantum advantage: it showed that a sufficiently large, fault-tolerant quantum computer can factor integers in polynomial time. Recently, Regev published an algorithm that provides a polynomial speedup over Shor's, without the need for fault tolerance. Regev's result leverages an isomorphism between factoring and the shortest vector problem on lattices, which had remained elusive for more than two decades.

This project provides resource estimates for different variants of Regev's quantum circuit, comparing state preparation routines and evaluating recent optimizations to quantum modular exponentiation. The classical post-processing of the samples from the quantum circuit is in scope for future work (more below).

The initial step of Regev's quantum circuit prepares the control qubits in a Gaussian superposition state. For n qubits, this is achieved by discretizing the domain of the Gaussian (normal) probability distribution into 2^n equally spaced regions and encoding those cumulative probabilities as amplitudes of the quantum state. For example, here is a visualization of successive sampling of a Gaussian state over n = 4 qubits, plotted using the Q# histogram:

As we add more shots, the histogram gradually adopts the shape of a bell curve. Such a visual test can be useful during development, especially when running on actual quantum hardware, where the quantum state is not available for introspection. This project explores three different algorithms for Gaussian state preparation:

- the Q# library operation `PreparePureStateD`;
- arbitrary state preparation by Möttönen et al., similar to the above, where the amplitudes for each basis state are specified;
- Grover-Rudolph state preparation, which is designed specifically for probability distributions like the Gaussian and does not require amplitudes as input.

In the resource estimation of the overall quantum circuit, we use the fastest of the three methods, `PreparePureStateD`, to initialize the Gaussian state (a sketch follows below).

The next step of Regev's quantum circuit is modular exponentiation on small primes. This project implements two different algorithms:

- binary exponentiation, used in Regev's original paper;
- Fibonacci exponentiation with the Zeckendorf representation of integers, using a fast algorithm for computing Fibonacci numbers.

Regev's algorithm uses the quantum computer to sample a multidimensional lattice. In terms of complexity analysis, Gaussian states have properties that work well on such lattices. However, it is unclear whether a Gaussian state is actually required in practice.
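Setting that question aside for a moment, here is a minimal, hypothetical sketch of the `PreparePureStateD`-based preparation mentioned above (not the team's code). It discretizes an unnormalized Gaussian into 2^n amplitudes and hands them to the library routine, which is assumed to live in `Microsoft.Quantum.Unstable.StatePreparation` and to normalize its input.

```qsharp
namespace GaussianStatePrep {
    open Microsoft.Quantum.Convert;
    open Microsoft.Quantum.Math;
    open Microsoft.Quantum.Unstable.StatePreparation;

    // Discretize exp(-(i - center)^2 / (2 sigma^2)) over the 2^n basis
    // states of the register, centered on the middle of its range, then
    // delegate to the library routine to encode the amplitudes.
    operation PrepareGaussianState(sigma : Double, qs : Qubit[]) : Unit {
        let size = 1 <<< Length(qs);
        let center = IntAsDouble(size - 1) / 2.0;
        mutable amps = [0.0, size = size];
        for i in 0..size - 1 {
            let x = (IntAsDouble(i) - center) / sigma;
            set amps w/= i <- ExpD(-x * x / 2.0);
        }
        PreparePureStateD(amps, qs);
    }
}
```

In Regev's circuit, sigma and the register width would be derived from the problem size; here they are left as parameters of the sketch.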
Because of this open question, our test matrix covers four profiles, combining the quantum modular exponentiation algorithm with the control-register state preparation algorithm:

- Fibonacci exponentiation with uniform superposition
- Binary exponentiation with uniform superposition
- Fibonacci exponentiation with Gaussian superposition
- Binary exponentiation with Gaussian superposition

Here are the resource estimation results for the different variants of the factoring circuit for N = 143:

The overall winner is Fibonacci exponentiation with a uniform distribution over the control qubits. In this analysis, the size of the control register is fixed to 20 logical qubits for all four profiles being tested. Preparing a uniform superposition is just a layer of Hadamard gates, which is the same for all problem sizes N. This is clearly advantageous over Gaussian state preparation, where the required radius of the Gaussian state increases exponentially with N.

This project focused on quantum resource estimation, and for these purposes the classical post-processing of the samples from the quantum circuit is not required. However, it is required for a complete implementation of Regev's algorithm. Current work includes investigating lattice reduction techniques, followed by filtering out corrupted samples and fast classical multiplication in order to compute a prime factor. Other state preparation algorithms in the literature, including ones specific to Gaussian states, may also prove beneficial by reducing the gate complexity and the number of samples required from the quantum circuit.

Check out the team's project!
- Integrated Hybrid Support in the Azure Quantum Development Kit, by Cesar Zaragoza Cortes, on July 29, 2024 at 4:49 pm
Introduction

Some quantum computers can do more than execute a static sequence of gates. The most advanced ones can perform mid-circuit measurements, conditionally execute gates, perform real-time classical computations, and re-use qubits. If you want to experiment with these state-of-the-art capabilities, we have good news for you: the Azure Quantum Development Kit now supports running hybrid quantum programs on hardware targets. This kind of program combines classical and quantum computations; thus, we refer to them as hybrid quantum programs.

Last year, we released Azure Quantum's Integrated Hybrid feature, enabling users to develop hybrid quantum programs using Q# and the QDK. Since then, we have modernized the QDK, but its initial release did not support this feature. After months of dedicated development, we are excited to announce that the QDK again supports implementing hybrid quantum programs!

Not only have we added support for these advanced capabilities, but we have also made significant improvements to the development experience. Users now have:

- More detailed and comprehensive design-time feedback.
- Support for a wider range of Q# features when creating hybrid quantum programs.
- Optimized compilation for running on quantum hardware.
- Increased execution reliability for programs submitted to quantum computers via the Azure Quantum service.

Integrated Hybrid Unlocks New Possibilities

Hybrid quantum computing refers to the process and architecture of a classical computer and a quantum computer working together to solve a problem. Integrated hybrid quantum computing is a specific kind of architecture that allows classical computations to be performed while qubits are coherent. This capability, in combination with mid-circuit measurement, enables features like branching based on measurement results and real-time integer computations. These features represent a step forward in the use of high-level programming constructs in quantum applications, opening the door to a new generation of hybrid algorithms such as adaptive phase estimation, repeat-until-success protocols, and some quantum error correction schemes.

In its most basic form, integrated hybrid quantum computing enables you to perform different operations based on the result of a qubit measurement. For example, the following code snippet conditionally applies an X operation to one qubit if the result of measuring another qubit is One:

```qsharp
namespace MyQuantumHybridProgram {
    @EntryPoint()
    operation Main() : Result {
        use qs = Qubit[2];
        H(qs[0]);
        if MResetZ(qs[0]) == One {
            X(qs[1]);
        }
        return MResetZ(qs[1]);
    }
}
```

Conditionally applying quantum gates based on measurement results is a feature that can be used for error correction: you can perform a syndrome measurement and, based on its outcome, apply the appropriate correction, as in the sketch below.
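As a minimal sketch of that idea (not code from this release), the hypothetical operation below measures the two parity checks of a three-qubit bit-flip code with ancilla qubits and applies the matching correction in real time. It assumes MResetZ from Microsoft.Quantum.Measurement and a target that supports branching on measurement results, such as the QIR Adaptive RI profile described below.

```qsharp
namespace BitFlipCorrectionSketch {
    open Microsoft.Quantum.Measurement;

    // Measures the two ZZ parity checks of a three-qubit bit-flip code
    // using two ancilla qubits, then branches on the syndrome to undo
    // at most one bit flip while the data qubits stay coherent.
    operation CorrectSingleBitFlip(data : Qubit[]) : Unit {
        use anc = Qubit[2];
        // Syndrome extraction: parity of (data[0], data[1]) and (data[1], data[2]).
        CNOT(data[0], anc[0]);
        CNOT(data[1], anc[0]);
        CNOT(data[1], anc[1]);
        CNOT(data[2], anc[1]);
        let s0 = MResetZ(anc[0]) == One;
        let s1 = MResetZ(anc[1]) == One;
        // Decode the syndrome and apply the correction in real time.
        if s0 and not s1 { X(data[0]); }
        if s0 and s1 { X(data[1]); }
        if not s0 and s1 { X(data[2]); }
    }
}
```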
You can also use other familiar Q# constructs such as loops, and even integer computations that are performed while qubits are coherent. For example, the following program keeps track of how many times a measurement resulted in One and returns a Bool indicating whether that count is even. Moreover, the program takes advantage of another hybrid quantum computing feature, qubit re-use, which allows us to use just one qubit instead of the five that would otherwise be required. Note that all of this is handled automatically by the Q# compiler.

```qsharp
namespace MyQuantumHybridProgram {
    @EntryPoint()
    operation Main() : Bool {
        use q = Qubit();
        mutable count = 0;
        let limit = 5;
        for _ in 1..limit {
            // Here we take advantage of an integrated
            // hybrid capability, qubit re-use, so we
            // can repeat this logic many times without
            // having to use a different qubit each time.
            H(q);
            if MResetZ(q) == One {
                set count += 1;
            }
        }
        return count % 2 == 0;
    }
}
```

The ability to perform different computations, either classical or quantum, opens the door to the development of new, innovative algorithms that are inherently hybrid.

Configuring the QDK

You can run hybrid quantum programs both from Visual Studio Code and from Python. In both cases, when working with a Q# program, select QIR Adaptive RI as the Q# target profile. This enables the QDK to provide accurate design-time feedback. Diving into the details of the QIR Adaptive RI profile:

- QIR stands for Quantum Intermediate Representation, which is the program representation that the Q# compiler produces.
- Adaptive is the specific QIR profile. Profiles are defined in the QIR specification and represent a coherent subset of functionality that quantum targets support.
- RI stands for qubit re-use and integer computations, respectively, which are extensions to the QIR Adaptive profile.

Currently, Quantinuum is the only provider in Azure Quantum that supports integrated hybrid quantum computing, so make sure you submit your programs to their targets.

The Q# Compiler Guides You

Once you have set up the Q# target profile, the QDK provides design-time feedback about Q# patterns that are not supported by the chosen quantum target.

Let's look at an example of the kind of feedback the QDK provides. Consider the following code snippet:

```qsharp
namespace MyHybridQuantumProgram {
    @EntryPoint()
    operation Main() : Int {
        use q = Qubit();
        H(q);
        let result = MResetZ(q);

        // We use the measurement result to determine
        // the value of variables of different types.
        // We refer to these variables and values as dynamic.

        // Dynamic Bool and Int values are supported by the
        // QIR Adaptive RI profile.
        let dynamicBool = result == One ? true | false;
        let dynamicInt = result == Zero ? 0 | 1;

        // Dynamic Double values are not supported by the
        // QIR Adaptive RI profile, so the following line
        // will result in a compilation error.
        let dynamicDouble = result == Zero ? 0. | 1.;

        // The QIR Adaptive RI profile supports returning
        // dynamic values of type Result, Bool, and Int.
        return dynamicInt;
    }
}
```

In this program, we use a qubit measurement to determine the value of Bool, Int, and Double variables. Since dynamic Bool and Int values are supported by the QIR Adaptive RI profile, the compiler does not produce any errors on the lines where the dynamicBool and dynamicInt variables are bound. However, since dynamic Double values are not supported by this profile, the compiler produces an error like the following on the line where the dynamicDouble variable is bound:

This is just one example of how the Q# compiler provides design-time feedback to guide you on the kinds of programs that integrated hybrid targets can execute. The accuracy and usefulness of the feedback has improved significantly compared to the previous QDK, in which the compiler could not determine whether a program could execute on a quantum target before submission. With the latest version of the QDK, programs execute more reliably when submitted to Azure Quantum targets.
The Q# Compiler Optimizes Your Program for Running on Quantum Hardware

Another improvement we have made is that we heavily optimize classical computations that do not need to be executed during coherence time. For example, in the following code snippet the loop-limit calculation is relatively complex. Even though integer computation support makes it possible to perform this calculation while qubits are coherent, the program does not strictly require it. Since computing resources on current quantum computers are limited, the Q# compiler pre-computes whatever it can to reduce the number of computations the quantum computer needs to perform, regardless of the data type. In this program, the compiler computes the value of the limit variable, unrolls the loop, and computes the value of angle for each iteration.

```qsharp
namespace MyHybridQuantumProgram {
    open Microsoft.Quantum.Convert;
    open Microsoft.Quantum.Math;

    @EntryPoint()
    operation Main() : Result {
        use q = Qubit();
        let seed = 42;
        let limit = ((seed + 10) % 5) * (seed ^ 2);
        for idx in 0..limit {
            let angle = IntAsDouble(idx) * PI();
            Rx(angle, q);
        }
        return MResetZ(q);
    }
}
```

Give It a Try

If you want to experiment with the most advanced capabilities quantum devices currently offer, install the Azure Quantum Development Kit VS Code extension or install the qsharp Python package, and start implementing your own hybrid quantum programs. You can get inspiration for your own hybrid quantum algorithms from our samples and experiments. We are excited to see what you can accomplish!
- Evaluating cat qubits for fault-tolerant quantum computing using the Resource Estimator, by Mathias Soeken and Elie Gouzien, on June 19, 2024 at 8:56 pm
Introduction

This blog post highlights a recent collaboration between Microsoft and Alice & Bob, a French startup whose goal is to build a fault-tolerant quantum computer by leveraging a superconducting qubit called a cat qubit. In this collaboration, Alice & Bob uses the new extensibility mechanisms of Microsoft's Resource Estimator to obtain resource estimates for their cat qubit architecture.

The Resource Estimator is a tool that helps evaluate the practical benefit of quantum algorithms. It calculates estimates of the expected runtime and the number of physical qubits needed to run a given program under different settings of the target fault-tolerant quantum computer. The default settings of the Resource Estimator represent generic gate-based and Majorana-based qubits, unbiased planar quantum error correction codes (i.e., 2D layouts for logical qubits assuming the same error rates for bit-flip and phase-flip errors) that support lattice surgery, and T factories that use multiple rounds of distillation (please refer to this paper for more details on these assumptions). These settings cover many quantum computing architectures, but they do not give quantum architects complete flexibility to model other important system architectures with different assumptions.

Microsoft is happy to announce that the Resource Estimator, which was made open source in January 2024, now has an extensibility API for modeling any quantum architecture and modifying all assumptions. To show how this extensibility API works, Microsoft and Alice & Bob demonstrate how it is used to model Alice & Bob's cat qubit architecture, along with a biased repetition code and Toffoli factories. The open-source example performs the resource estimation for elliptic curve cryptography described in Alice & Bob's Physical Review Letters paper from July 2023.

Architecture

Cat qubits have special error correction requirements because they exhibit biased noise: they suffer several orders of magnitude fewer bit flips than phase flips. They use engineered two-photon dissipation to stabilize two coherent states of the same amplitude and opposite phase, which are used as the 0 and 1 of the qubit. The Alice & Bob roadmap takes advantage of this asymmetry to simplify the error correction strategy. To achieve this, however, the usual hierarchy of gates used in quantum computing has to be modified. As a first step, a gate set that preserves this noise-biasing property needs to be built. Then, from this set, a universal set of fault-tolerant operations has to be offered (note that the bias-preserving gate set is typically not universal, but it is sufficient to implement a universal gate set at the logical level). This work is carried out in the article Repetition Cat Qubits for Fault-Tolerant Quantum Computation and summarized in the figure below.

Alice & Bob's architecture highlights the importance of extensibility in the Resource Estimator and the ability to override the pre-defined settings. The typical error correction code used by the Resource Estimator is the surface code, but cat qubits require a repetition code. The Resource Estimator assumes a "Clifford+T" universal gate set, while the gate set presented above for cat qubits is "Clifford+Toffoli".

Implementation details

The Resource Estimator, which is written in Rust, can be extended by using a Rust API.
The main function of the Resource Estimator is to calculate physical resource estimates for a logical overhead with respect to an error correction protocol, a physical qubit, and a factory builder. The interaction of these components is illustrated in the architecture diagram above. Each of these components is an interface that can be implemented, which allows full flexibility. For instance, the Resource Estimator does not need to know about the input program, or even the layout method; it only needs the logical overhead, which gives the number of logical qubits, the logical depth, and the number of required magic states. Likewise, the implementations of the other interfaces provide the information needed for the resource estimation. We explain some aspects of the implementation in the remainder of this section, but please refer to the example source code on GitHub for more details.

The error correction protocol in the Resource Estimator defines both the physical qubit and the code parameter that it uses. For most codes, the code parameter is the code distance, and finding a value for the code distance that ensures a desired logical error rate given a physical qubit is one of the main goals of the error correction protocol. The Alice & Bob architecture uses a repetition code with two parameters: the distance and the average number of photons. The distance deals with phase-flip errors, and the number of photons must be high enough to avoid bit-flip errors, so that the repetition code can focus on correcting only the phase flips.

A factory builder's job is to make magic state factories that produce magic states with a given maximum output error probability. The factories can either be pre-computed or calculated on demand when a new request is made. They can also use the error correction protocol and select their own code parameters. For Alice & Bob's architecture, the magic state that is produced is a CCX (Toffoli) state, and a pre-computed list of Toffoli factories is available (see also Table 3 in the paper).

We make two main assumptions about the input program: that it uses mostly CX (CNOT) and CCX (Toffoli) gates, and that these are not run in parallel but each have their own cycle time (i.e., their own number of required error correction syndrome extraction cycles). With these assumptions, and the number of logical algorithm qubits before taking the layout into account, we can easily calculate the layout overhead as a function of the number of logical qubits and the numbers of CX and CCX gates. The paper from Alice & Bob gives formulas for these three metrics for the elliptic curve cryptography algorithm, so the layout overhead can be generated as a function of the key size and some implementation details (such as the window size for windowed arithmetic). Moreover, we use the Azure Quantum Development Kit (QDK) to compute a logical overhead by evaluating a Q# program.

The graph above compares the space-time trade-off of resource estimates obtained with the Resource Estimator against the estimates from the paper. The paper reported a quicker solution that needed more qubits, while the Resource Estimator produced estimates with fewer qubits and a longer, but feasible, runtime. Note that the Resource Estimator does not automatically explore application-specific parameters (such as window sizes for windowed arithmetic).

Next steps

You can try out and execute the Alice & Bob resource estimation example that uses Microsoft's Resource Estimator.
As it is open source, you can easily change the application input. The cost model, which relies on CX and CCX gates, is compatible with many logical resource estimation research papers in the literature, so results from those papers can be quickly converted into physical resource estimates. Further, you can examine various Q# programs that are available in the Q# GitHub repository. We hope that the Resource Estimator gives you useful insights and helps your research, and we welcome your feedback.
- Circuit Diagrams with Q#, by Mine Starks, on May 20, 2024 at 9:32 pm
Introduction

If you've been exploring quantum programming using Q#, you may have been thinking, "This language is great and so easy to use! But what about visual learners?"

I'm a software engineer on the Azure Quantum Development Kit team, and I'm very excited to share a new feature I've been working on: circuit visualization in Q#.

One of the neat things about Q# is that it gives you the ability to express quantum algorithms in a procedural language reminiscent of classical programming languages such as C and Python. If you're already a programmer, this way of thinking will be very intuitive to you, and you can get started with quantum computing right away (if you haven't done so yet, check out our quantum katas).

However, this isn't how many people learn about quantum computing today. If you flip through any quantum computing textbook, you'll see that it's conventional to think in terms of quantum circuits.

We wanted to bridge the gap between these two modes of thinking.

Getting Started

If you open any Q# program in VS Code, you'll notice a little "Circuit" CodeLens above the entry point declaration. When you click on it, your Q# program is rendered as a quantum circuit diagram.

Being able to go from Q# code to circuit diagrams means that you can use familiar constructs such as `for` loops and `if` statements to manipulate the quantum state, while still being able to view the logical circuit at any time to get a high-level view of your algorithm.

How does this work?

The quantum circuit for a Q# program is generated by executing all the classical parts of the program while keeping track of when qubits are allocated and which quantum gates are applied. This data is then displayed as a quantum circuit.

Not all quantum programs can be represented as straightforward quantum circuits. What if we have a dynamic (commonly known as "adaptive") circuit? Say we have a `while` loop in our program that compares measurement results and takes an action that depends on the result. The exact set of gates in the program is no longer deterministic.

That's when we need to run the program through the quantum simulator. This is called "trace" mode, since we're tracing the quantum operations as they are actually performed in the simulator. When the circuit visualizer detects that the program contains measurement comparisons, this mode is activated.

Depending on your luck, you may end up with two gates, or you may end up with many more! Each time you generate the circuit, you may see a different outcome in the circuit diagram.

It would certainly be nice to visualize all the outcomes at once, and we're working through some ideas on how to do that. Simple conditionals can be represented as gates controlled by classical wires. But given a language as expressive as Q#, you can write complex conditionals that are difficult to visualize on a single 2-D circuit diagram. How would you represent an adaptive circuit like this? We'd love to hear your ideas. You can leave a comment here or on this GitHub issue.
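As a concrete, hypothetical stand-in for the adaptive-circuit figure in the original post, here is a small Q# program you can paste into VS Code: it keeps applying H and measuring until it sees Zero, so the number of gates in the traced circuit changes from run to run. It assumes MResetZ from Microsoft.Quantum.Measurement and support for `while` loops over measurement results in the modern QDK.

```qsharp
namespace AdaptiveCircuitDemo {
    open Microsoft.Quantum.Measurement;

    @EntryPoint()
    operation Main() : Result {
        use q = Qubit();
        // Flip a fair quantum coin until it comes up Zero. The number of
        // H gates recorded in trace mode depends on the measurement
        // outcomes, so every generated circuit diagram can differ.
        mutable outcome = One;
        while outcome == One {
            H(q);
            set outcome = MResetZ(q);
        }
        return outcome;
    }
}
```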
Reflections

Working on this feature sparked a lot of lively debate within the team, especially during the design stage. We're a team with diverse technical backgrounds. Some of us found it very intuitive to think in terms of circuits. Others preferred reading code and thought circuit diagrams were very limiting. Did we even need the feature at all?

I now realize it's not either-or: it's very powerful to be able to do both. Even if you prefer one paradigm over the other, being able to inspect your code through different lenses really deepens your understanding of the problem you're working on. You can run simulations and look at a histogram of the results. You can step through the code using the Q# debugger. And now you can view it as a circuit diagram. Each view into the problem offers a different insight.

This is also why testing this feature was so much fun for me. I'm far from an expert in quantum computing; some of our Q# samples are admittedly still confusing to me. As I ran the circuit visualizer on each sample, giving it a final look-over, I found the process unexpectedly satisfying. I felt like I was finally starting to understand what these algorithms were doing. I'm happy for this new addition to my learning toolkit!

Your own quantum circuit diagrams

If you'd like to try out Q# circuit diagrams for yourself, head over to the Azure Quantum Playground and give it a try now, no installation necessary. When you're ready to work on your own Q# projects, install the Azure Quantum Development Kit VS Code extension. If you prefer working in Python, head over to the documentation for instructions on how to get started in Jupyter Notebooks. Let us know what you think!
- Design Fault Tolerant Quantum Computing applications with the Resource Estimator, by Fabrice Frachon, Mathias Soeken, and Ivan Basov, on January 16, 2024 at 8:46 pm
Introduction

We are excited to announce that, following its initial release, the Azure Quantum Resource Estimator is now open source. It has been integrated with the Modern QDK, making it run up to 100x faster, and it runs on PC, Mac, and Linux, or from your web browser. Try it now.

Why is resource estimation relevant today?

Quantum computing has the potential for widespread societal and scientific impact, and many applications have been proposed for quantum computers. The quantum community has reached a consensus that NISQ machines do not offer practical quantum advantage and that it is time to graduate to the next of the three implementation levels.

Unlike computing with transistors, basic operations with qubits are much more complicated and an order of magnitude slower. We now understand that practical quantum advantage will be achieved for small-data problems that offer super-polynomial quantum speedup (see T. Hoefler et al., CACM 66, 82-87). This includes, specifically, the simulation of quantum systems in quantum physics, chemistry, and materials science.

But at a basic level, there are still many open questions:

- What are the most promising and useful quantum algorithms on which to build useful quantum applications?
- Which quantum computing architectures and qubit technologies can reach the necessary scale to run such quantum-accelerated applications?
- Which qubit technologies are well suited to practical quantum supercomputers?
- Which quantum computing technologies are unlikely to achieve the necessary scale?

That's why we need the Resource Estimator: to help us answer these questions and guide today's research and development toward logical qubit applications.

Resource Estimator: an open-source tool for every level of the quantum stack

Achieving practical quantum advantage will require improvements and domain expertise at every level of the quantum computing stack. A unified open-source tool to benchmark solutions and collaborate across disciplines will speed up our path toward a quantum supercomputer: this is the premise of the Azure Quantum Resource Estimator.

Whether you are developing applications, researching algorithms, designing language compilers and optimizers, creating new error correction codes, or working on R&D for faster, smaller, and more reliable qubits, the Resource Estimator helps you assess how your theoretical or empirical enhancements can improve the whole stack.

As an individual researcher, you can leverage prebuilt options to focus on your area. If you are part of a team, you can work collectively at every level of the stack and see the results of your combined efforts.

Easy to start and fully customizable

The Resource Estimator is an estimation platform that lets you start with minimal inputs, abstracting away the many specifics of quantum systems. If you require more control, you can adjust and explore a vast number of system characteristics.

The Resource Estimator can quickly explore thousands of possible solutions. This accelerates the development lifecycle and lets you easily review trade-offs between computation time and number of physical qubits.

The table below summarizes some of the ways you can adapt the Resource Estimator to your needs, allowing you both to specify the description of the quantum system and to control the exploration of estimates. Explore all available parameters.
Describe your system:

- Qubit parameters: instruction set, qubit measurement time, gate times, error rates, and more parameters.
- QEC scheme: error correction threshold, logical cycle time, physical qubits per logical qubit, and more parameters.
- Distillation units: number of T-states required for the distillation process, number of T-states produced as output from the distillation process, probability of failure of the distillation process, and more parameters.

Explore and control estimates:

- Explore trade-offs: set constraints and explore trade-offs between runtime and number of qubits.
- Set an error budget.
- Use known estimates for an operation.
- Compute and cache costs of sub-routines.
- Batch multiple estimates.
- Chemistry estimates*: estimate the physical resources required to calculate the energy of a Hamiltonian.
- Use your preferred SDK: Q#, Qiskit*, or optimized QIR* from your existing compiler chain.

*Currently requires an Azure subscription.

Get started

If you are ready to get started, you can choose from:

- No-code exploration of quantum computing resource estimates for cryptography.
- No-install Q# estimates with VS Code for the Web:
  1. Open the Modern QDK playground.
  2. Select one of the Resource Estimation samples, such as Shor or quantum dynamics.
  3. Open the command palette (press Ctrl+Shift+P on PC or Cmd+Shift+P on Mac) and select Q#: Calculate Resource Estimates.
  4. Set parameters. In a few seconds, see physical qubits, algorithm runtime, and dozens of breakdown parameters.
  5. Select a different algorithm for comparison.
- One-click install with VS Code, full customization options with Python + Q#, and support for Qiskit with an Azure subscription.

Read more from the documentation.

Participate in the open source project

To join the discussion or contribute to the development of the Resource Estimator, visit https://aka.ms/AQ/RE/OpenSource.

Coming soon

2024-01-29 update: This feature is now available. Learn more from the Pareto frontier documentation.

Understanding the trade-off between runtime and system scale is one of the more important aspects of resource estimation. To help you better understand and visualize these trade-offs, the Resource Estimator will soon provide fully automated exploration and graphics, such as the one below.

Make sure to subscribe to the Q# blog to be notified of this feature's availability.
- Announcing v1.0 of the Azure Quantum Development Kit, by Bill Ticehurst, Principal Quantum Software Engineering Manager, on January 12, 2024 at 11:07 pm
Introduction

Today we are excited to announce the 1.0 release of the Azure Quantum Development Kit, which we often refer to simply as "the QDK".

As outlined in an earlier blog post, this is a significant rewrite of the prior QDK with an emphasis on speed, simplicity, and a delightful experience. Review that post for the technical details on how we rebuilt it. At the product level, the rewrite has enabled us to make some incredible improvements that exceeded the expectations we set out with, some highlights being:

- A full-featured browser experience
- Vastly simplified installation across platforms
- Up to 100x performance improvement
- Comprehensive code-editing productivity features
- A debugger to gain new insights as you develop and fix your quantum programs
- Powerful resource estimation capabilities directly in the editor
- Integration with the Azure Quantum service
- And much more!

This post includes lots of video clips to highlight some of these experiences (all videos were recorded in real time).

Up and running in a flash

For the fastest getting-started experience, just go to https://vscode.dev/quantum/playground/. The QDK extension for VS Code works fully in VS Code for the Web, and this URL loads an instance of VS Code in the browser with the QDK extension preinstalled, along with a virtual file system pre-loaded with some common quantum algorithms. You can experiment here, then simply close the browser tab when done, without installing anything or accessing any files on your local machine.

If you are using VS Code on your local machine (or https://vscode.dev directly), then installing the extension is a snap. Simply go to the VS Code Extension Marketplace, search for "QDK", and install the "Azure Quantum Development Kit" extension published by "Microsoft DevLabs" (direct link). The extension is lightweight with no dependencies and installs in seconds, as shown below.
- Defining logical qubits: Criteria for Resilient Quantum Computation, by Dr. Krysta M. Svore, Distinguished Engineer, on November 29, 2023 at 10:21 pm
Introduction

As an industry, we are all collectively committed to bringing scaled quantum computing to fruition. Understanding what it will take to reach this goal is crucial not just for measuring industry progress, but also for developing a robust strategy to build a quantum machine and a quantum-ready community. That's why, in June 2023, we outlined how quantum computing must graduate through three implementation levels to achieve utility scale: Level 1 Foundational, Level 2 Resilient, Level 3 Scale. All quantum computing technologies today are at Level 1, and while numerous NISQ machines have been developed, they do not offer practical quantum advantage. True utility will only come from orchestrating resilient quantum computation across a sea of logical qubits, something that, to the best of our knowledge, can only be achieved through fault tolerance and error correction. And it has not yet been demonstrated.

The next step toward practical quantum advantage, and Level 3 Scale, is to demonstrate resilient quantum computation on a logical qubit. Resilience in this context means the ability to show that quantum error correction helps, rather than hinders, non-trivial quantum computation. However, an important element of this non-triviality is the interaction between logical qubits and the entanglement it generates, which means resilience of just one logical qubit will not be enough. Therefore, demonstrating two logical qubits performing an error-corrected computation that outperforms the same computation on physical qubits will mark the first demonstration of a resilient quantum computation in our field's history.

Before our industry can declare victory on reaching Level 2 Resilient quantum computing by performing such a demonstration on a given quantum computing hardware, it's important to agree on what this entails, and on the path from there to Level 3 Scale.

Defining a logical qubit

The most meaningful definition of a logical qubit hinges on what one can do with that qubit: demonstrating a qubit that can only remain idle, that is, be preserved in memory, is not as meaningful as demonstrating a non-trivial operation. Therefore, we define a logical qubit such that it initially allows some non-trivial, encoded computation to be performed on it.

A significant challenge in formally defining a logical qubit is accounting for distinct hardware; for example, the definition should not favor one hardware over another. To address this, we propose a set of criteria that marks the entrance into the resilient level of quantum computation. In other words, these are the criteria for calling something a "logical qubit".

Entrance criteria to Level 2

Graduation to Level 2 Resilient quantum computing is achieved when fewer errors are observed on the output of a logical, error-corrected quantum circuit than on the analogous physical circuit without error correction.[1] We also require that a resilient-level demonstration include some uniquely "quantum" feature. Otherwise, the demonstration reduces to a merely novel demonstration of probabilistic bits.

Arguably the most natural "quantum" feature to demonstrate in this regard is entanglement.
A demonstration of the resilient level of quantum computation should then satisfy the following criteria:

- it demonstrates a convincingly large separation between the logical error rate of a non-trivial logical circuit and the physical error rate of its physical counterpart;
- it corrects at least all individual circuit faults;
- it generates entanglement between at least two logical qubits.

Upon satisfaction of these criteria, the term "logical qubit" can then be used to refer to the encoded qubits involved.

The distinction between the Resilient and Scale levels is worth emphasizing: a proof-of-principle demonstration of resiliency must be convincing, but it does not require a fully scaled machine. For this reason, a resilient-level demonstration may use certain forms of post-selection. Post-selection here means the ability to accept only those runs that satisfy specific criteria. Importantly, the chosen post-selection method must not replace error correction altogether, as error correction is central to the type of resiliency that Level 2 aims to demonstrate.

Measuring progress across Level 2

Once entrance to the Resilient level is achieved, as an industry we need to be able to measure continued progress toward Level 3. Not every type of quantum computing hardware will achieve Level 3 Scale; the requirements to reach practical quantum advantage at Level 3 include achieving upwards of 1000 logical qubits operating at a mega-rQOPS with logical error rates better than 10^-12. And so it is critical to be able to understand advancements within Level 2 toward these requirements.

Inspired in part by DiVincenzo's criteria, we propose to measure progress along four axes: universality, scalability, fidelity, and composability. For each axis we offer the following ideas on how to measure it, with hopes the community will build on them.

Universality: A universal quantum computer requires both Clifford and non-Clifford operations. Is there a set of high-fidelity Clifford-complete logical operations? Is there a set of high-fidelity universal logical operations? A typical strategy is to design the former, which can then be used in conjunction with a noisy non-Clifford state to realize a universal set of logical operations. Of course, different hardware and approaches to fault tolerance may employ different strategies.

Scalability: At its core, the resource requirements for advantage must be reasonable (i.e., a very small fraction of the Earth's resources or a person's lifetime). More technically, does the resource overhead required scale polynomially with the target logical error rate of any quantum algorithm? Note that some hardware may achieve very high fidelity but have a limited number of physical qubits, so that improving the error correction code in the most obvious way (increasing the code distance) may be difficult.

Fidelity: Logical error rates of all operations must improve with code strength. More strictly, is the logical error rate better than the physical error rate, i.e., is each operation fidelity "sub-pseudothreshold"? Progress on this axis can be measured with Quantum Characterization, Verification & Validation (QCVV) performed at the logical level, or by engaging in operational tasks such as Bell inequality violations and self-testing protocols.

Composability: Are the fault-tolerant gadgets for all logical operations composable?
It is not sufficient to demonstrate operations separately; rather, it is crucial to demonstrate their composition into richer circuits and, eventually, more powerful algorithms. More crucially, the performance of the circuits must be bounded by the performance of the components in the expected way. Metrics along this line will enable us to check what logical circuits can be run, and with what expected fidelity.

Criteria to advance from Level 2 to Level 3 Scale

The exit from the resilient level of logical computation will be marked by large-depth, high-fidelity computations involving upwards of hundreds of logical qubits. For example, a logical, fault-tolerant computation on ~100 logical qubits or more, with a universal set of composable logical operations and an error rate of ~10^-8 or better, will be necessary. At Level 3, performance of a quantum supercomputer can then be measured in reliable quantum operations per second (rQOPS). Ultimately, a quantum supercomputer will be achieved once the machine is able to demonstrate 1000 logical qubits operating at a mega-rQOPS with a logical error rate of 10^-12 or better.

Conclusion

It's no doubt an exciting time to be in quantum computing. Our industry is at the brink of reaching the next implementation level, Level 2, which puts our industry on the path to ultimately achieving practical quantum advantage. Together as a community we have an opportunity to help measure progress across Level 2 and to introduce benchmarks for the industry. If you have ideas or feedback on criteria to enter Level 2, or on how to measure progress, we'd love to hear from you.

[1] Our criteria build on and complement the criteria of both DiVincenzo (DiVincenzo, David P. (2000). "The Physical Implementation of Quantum Computation". Fortschritte der Physik 48 (9-11): 771-783) and Gottesman (Gottesman, Daniel (2016). "Quantum fault tolerance in small experiments". https://arxiv.org/abs/1610.03507), who have previously outlined important criteria for achieving quantum computing and its fault tolerance.
- Calculating resource estimates for cryptanalysis, by Mathias Soeken, Senior Software Engineer, on November 1, 2023 at 9:36 pm
The code for evaluating the data is the same and returns the following table:

| Configuration | Physical qubits | Physical runtime |
| --- | --- | --- |
| Gate-based (reasonable) | 25.17M | 1 day |
| Gate-based (optimistic) | 5.83M | 12 hours |
| Majorana (reasonable) | 13.40M | 9 hours |
| Majorana (optimistic) | 4.18M | 5 hours |

We can use the same program to compute resource estimates for other RSA integers, including the RSA challenge numbers RSA-3072 and RSA-4096, whose estimates are part of the cryptography experience on the Azure Quantum website.

Advanced Encryption Standard (AES)

The Advanced Encryption Standard (AES) is a symmetric-key algorithm and a standard for the US federal government. To obtain the physical resource estimates for breaking AES, we started from the logical estimates in Implementing Grover oracles for quantum key search on AES and LowMC (arXiv:1910.01700, Table 8), with updates on the qubit counts suggested in Quantum Analysis of AES (Cryptology ePrint Archive, Paper 2022/683, Table 7). In principle, we could follow the approach using the AccountForEstimates operation as we did for ECC (see the sketch at the end of this post). However, this operation and the logical counts in the Azure Quantum Resource Estimator are represented using 64-bit integers for performance reasons, whereas the AES estimates require 256-bit integers. As a result, we used an internal, non-production version of the resource estimator that can handle this precision. Further details can be made available to researchers if you run into similar precision issues in your own resource estimation projects.

Learn more

The Azure Quantum Resource Estimator can be applied to estimate any quantum algorithm, not only cryptanalysis. Learn how to get started with Microsoft Quantum today via the Azure Quantum documentation. There you will find how to explore all the rich capabilities in various notebooks, with applications in quantum chemistry, quantum simulation, and arithmetic. You can learn how to submit your own quantum programs written in Q#, Qiskit, or provided directly as QIR, as well as how to set up advanced resource estimation experiments and apply customizations such as space/time trade-offs.
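To make the known-estimates workflow mentioned above concrete, here is a minimal, hypothetical Q# sketch of feeding pre-computed logical counts to the Resource Estimator via AccountForEstimates from Microsoft.Quantum.ResourceEstimation. The counts and qubit number below are placeholders, not the ECC or AES figures from the papers cited in this post.

```qsharp
namespace KnownEstimatesSketch {
    open Microsoft.Quantum.ResourceEstimation;

    @EntryPoint()
    operation EstimateFromLogicalCounts() : Unit {
        // Placeholder values only; substitute the logical counts reported
        // in the paper you are reproducing.
        use qubits = Qubit[100];
        AccountForEstimates(
            [
                TCount(1000000),
                CczCount(500000),
                MeasurementCount(200000)
            ],
            PSSPCLayout(),
            qubits);
    }
}
```

Running the Q#: Calculate Resource Estimates command on such an entry point then produces physical estimates directly from the supplied logical counts.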
- Azure Quantum Integrated Hybrid unlocks algorithmic primitives, by Martin Roetteler, Director of Quantum Applications, on October 19, 2023 at 3:49 pm
To build a quantum supercomputer that can solve the world's hardest and most complex problems in chemistry and materials science, several key ingredients need to come together. First, today's foundational-level quantum machines need to be scaled up to a size of at least one million stable and controllable qubits. These are the table stakes for solving any interesting, useful algorithmic problem better or faster than a classical computer, based on what we know from profiling quantum programs using the Azure Quantum Resource Estimator.

Second, they need to be kept stable, which means that error correction will be needed to combat the fundamental noise processes that disrupt the quantum computer. Creating such stability basically means forging the underlying noisy physical qubits into more stable logical qubits and using fault-tolerant methods to implement operations. Microsoft's unique topological qubit design has stability built in at the hardware level and, in turn, will require less overhead to realize logical, fault-tolerant computation with a quantum error correcting code. No matter the underlying qubit design, advanced classical computational power will be required to keep a quantum machine stable, along with the underlying quantum error correcting code.

Finally, a quantum supercomputer will necessarily be hybrid, both in its implementation and in the solutions it runs. After all, all quantum algorithms require a combination of quantum and classical compute to produce a solution. And it is in the careful design of the classical and quantum compute, together, where we will see future innovation and new types of solutions emerge. Hybrid quantum computing enables the seamless integration of quantum and classical compute. This is an important part of achieving our path to quantum at scale and of integrating our quantum machine alongside classical supercomputing machines in the cloud.

Implementing hybrid quantum algorithms

Integrated Hybrid in Microsoft Quantum allows classical and quantum code to be mixed together already today. "This opens the door to a new generation of hybrid algorithms that can benefit from complex side-computations that happen while the quantum state of the processor stays coherent," says Natalie Brown, Senior Advanced Physicist at Quantinuum.

Magic State Distillation (MSD) protocols are quantum error correction methods for creating special quantum states with properties that are preferable for implementing universal, programmable quantum computers. Just as classical distillation separates the components of a liquid mixture by boiling and condensation, quantum distillation separates out the noisy part of a quantum state and retains the good part. The hybrid nature of the quantum protocol manifests itself in a repeat loop that keeps running until a certain measurement result is seen. Afterward, the system is found in the distilled state, and the computation continues with the resource state. A visualization of the protocol is shown here:

The number of repetitions of the loop in the middle block depends on the measurement outcomes and cannot be determined in advance; in other words, this program cannot be implemented as a static quantum circuit. Once the measurements of the 4 lower qubits indicate the result "0000", the topmost qubit is passed on as the output of the computation. If any other syndrome is measured, the 5 qubits are reset and the procedure starts over; the control-flow skeleton of such a loop is sketched below.
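Purely as a control-flow illustration (the actual [[5,1,3]]-code gate sequence is abstracted into an empty placeholder operation, and MResetZ, MeasureEachZ, and All are assumed from the Microsoft.Quantum.Measurement and Microsoft.Quantum.Arrays namespaces), a hypothetical Q# skeleton of that repeat-until-"0000" loop could look like the following; the real gate sequence is described in the paper referenced above.

```qsharp
namespace DistillationSkeleton {
    open Microsoft.Quantum.Arrays;
    open Microsoft.Quantum.Measurement;

    // Placeholder for the actual distillation circuit on 5 qubits;
    // the real gate sequence is in the referenced paper.
    operation ApplyDistillationCircuit(qs : Qubit[]) : Unit {
    }

    // Control-flow skeleton only: re-prepare and re-measure until the
    // four syndrome qubits all read Zero (or a retry limit is hit), then
    // hand back the measurement of the distilled top qubit.
    operation DistillUntilSuccess(maxAttempts : Int) : Result {
        use qs = Qubit[5];
        mutable succeeded = false;
        mutable attempts = 0;
        repeat {
            ResetAll(qs);
            ApplyDistillationCircuit(qs);
            // Mid-circuit measurement of the four lower qubits.
            let syndrome = MeasureEachZ(qs[1...]);
            set succeeded = All(r -> r == Zero, syndrome);
            set attempts += 1;
        } until succeeded or attempts >= maxAttempts;
        let output = MResetZ(qs[0]);
        ResetAll(qs[1...]);
        return output;
    }
}
```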
Repeat-Until-Success (RUS) protocols are algorithmic primitives that allow complex quantum instructions not found in the basic instruction set to be implemented with high accuracy. An RUS protocol implements a complex target operation in a way that is best described as "speculative": a simple operation is applied to a system that is larger than the original system on which the input state lives. Part of this larger system is measured, partially collapsing the quantum state. If the measurement yields the correct result, the target operation has been implemented perfectly. If the result is wrong, the input state does not suffer any damage, and the procedure can be tried again. Shown below is a visualization of a particular RUS protocol: a two-stage RUS circuit with two ancilla qubits that implements the unitary V3 = (I + 2iZ)/√5, a rotation around an irrational angle.

Similar to MSD, RUS circuits require real-time feedback and control, as the success of the circuit depends on the measurement outcomes. Moreover, RUS circuits allow trade-offs between circuit representations, such as using recursion or loops, which may affect the performance and scalability of the circuit. To explore these trade-offs, we need a flexible representation that can express different versions of RUS circuits and compare their results.

What these two quantum algorithms have in common is that they require complex control flow, including measurements that are applied during the computation while some part of the quantum computer remains coherent.

Experimental results

Recently, as shared in a paper posted on arXiv.org, a team of researchers from Microsoft and Quantinuum developed and ran MSD and RUS algorithms on the H1-Series in Azure Quantum.

The programs for these applications were written in Q# and then compiled to the Quantum Intermediate Representation (QIR), which is based on LLVM, a representation widely used in classical compilers. QIR can represent quantum and classical logic using function declarations, basic blocks, and control-flow instructions. QIR also enables us to use existing LLVM tools and techniques to analyze and optimize the program logic (eliminating unnecessary instructions and reducing transport steps), such as constant folding, loop unrolling, and dead-code elimination.

Quantinuum's H1-Series quantum computer leverages QIR in a powerful way: it can execute hybrid classical/quantum programs. On the classical side, rich control flow is supported through the integration with QIR, including:

- nested conditional statements,
- mid-circuit measurements and branching depending on results,
- re-use of already measured qubits,
- classical function calls.

These primitive building blocks can be used to orchestrate computations such as MSD and RUS.

MSD protocol based on the [[5,1,3]] quantum error-correcting code

The left side of the following figure shows the expectation values for the actual run on the Quantinuum H1-1 system, as well as the results of a simulation run on the H1-1 emulator (denoted H1-1E). We plot the expectation values with respect to the three Pauli frames X, Y, and Z, which completely characterize the state of the qubits. The boxes indicate the ideal result, which is only achievable for gates that are completely noiseless. The right side of the figure shows the probability of distillation succeeding at different limits, running on both the H1-1 system and the H1-1E emulator.
The dashed black line indicates the probability of success expected for perfect state preparation on a noiseless device.

Two-stage RUS circuit

Researchers demonstrated the viability of this RUS protocol using QIR on Quantinuum's QCCD simulator, which models realistic noise and errors in trapped-ion systems, and by running it on the actual device. QIR was used to express four different versions of the RUS circuit, each using a different combination of recursion or loops, and Q# or OpenQASM as the source language.

As shown in the figure on the left above, the RUS protocol shows the best performance when the Q# to QIR compiler is used and applied to a Q# implementation that realizes the RUS protocol as a for loop. As the iteration limit is increased, there is a clear drop in performance for the recursion implementations, while the performance of the loop implementations closely tracks the hand-optimized OpenQASM 2.0++ code.

A full Q# code sample that runs in Microsoft Quantum and implements this hybrid program can be found at https://aka.ms/AQ/Samples/RUS.

Conclusion

In this blog post, we have shown how Q# can be used to implement and optimize fault-tolerant protocols that use a hybrid approach of quantum and classical logic. We presented two examples of such protocols, MSD and RUS circuits, and demonstrated their execution and performance through Microsoft Quantum on Quantinuum's H1-Series system, which runs on an ion-trap quantum charge-coupled device (QCCD) architecture. We also showed how QIR can leverage the LLVM toolchain to enable interoperability and portability across different quantum hardware platforms.

Learn more and get started with Microsoft Quantum today

Whether you're starting your own learning journey, exploring quantum hardware, or developing quantum algorithms for the future, Microsoft Quantum offers a platform for your quantum exploration and innovation. For enterprises interested in accelerating scientific discovery today, you can learn more about the recently announced Azure Quantum Elements, Microsoft's system for computational chemistry and materials science combining the latest breakthroughs in HPC, AI, and quantum computing. At Microsoft, we are architecting a public cloud with Azure that enables scaled quantum computing to become a reality and then seamlessly delivers its profound benefits to our customers. Learn more about how Microsoft is harnessing the power of the cloud to make the promise of quantum at scale a reality, and join our Microsoft Quantum Innovator Series, with distinguished speakers who provide unique insights into Microsoft's quantum computing effort.

#quantumcomputing #quantumcloud #azurequantum #quantinuum #QIR
- Introducing the Azure Quantum Development Kit Preview by Bill Ticehurst, Principal Quantum Software Engineering Manager on September 19, 2023 at 4:17 pm
Introduction
100x faster, 100x smaller, and it runs in the browser! The Microsoft Quantum team is excited to announce the initial preview of the new Azure Quantum Development Kit (or QDK for short). It has been entirely rebuilt on a new codebase and a new technology stack, and this blog post outlines the why, the how, and some of the benefits of doing so.

The "tl;dr" is that we rewrote it (mostly) in Rust, which compiles to WebAssembly for VS Code or the web, and to native binaries for Python. It's over 100x smaller, over 100x faster, much easier to install & use, works fully in the browser, and is much more productive & fun for the team to work on.

Give it a try via the instructions at https://github.com/microsoft/qsharp/wiki/Installation, and read on for the details...

Our goals
The existing Quantum Development Kit has grown organically over several years, first shipping in late 2017. Being in a fast-evolving space, it naturally evolved quickly too, incorporating many features and technologies along the way.

As we reflected on what we'd like the QDK to be going forward, it was clear that some of those technologies and features would be a challenge to bring along, and that a rewrite might be the best solution. Some of our goals were:

A simplified user experience
Many quantum developers don't come from a .NET background, being mostly familiar with Python. However, the existing QDK exposes much of the .NET ecosystem to developers, adding an extra learning curve; examples include the MSBuild-based project & build system and NuGet package management. When we work with customers on issues, they are sometimes confused by needing to edit .csproj files, run commands such as "dotnet clean", or troubleshoot NuGet packages for their Q# projects.

Providing a delightful & simplified experience, from installation to learning to coding to troubleshooting to submitting jobs to quantum computers, is our primary goal.

Platform support
The existing QDK has some code and dependencies that are platform specific. While these were not problems initially, as platforms have evolved this has caused challenges. For example, Apple Silicon and Windows on ARM64 are not fully supported in the existing QDK. We also wanted the tools to run in the browser, such as in our new https://quantum.microsoft.com portal, or in a https://vscode.dev hosted editor.

Performance & reliability
With the runtime dependencies in the existing QDK, the full set of binaries that needs to be installed has grown quite large. Besides the .NET runtime itself, there are F# library dependencies in the parser, C++ multi-threading library dependencies in the simulator, NuGet dependencies for the Q# project SDK, and so on. In total, this can add up to over 180MB installed locally after building a simple Q# project. Coordinating the download and initialization of these binaries, as well as the complexity of the interactions between them, can often lead to performance & reliability issues.

Engineering velocity
As the existing QDK had come to span multiple repositories, multiple build pipelines, multiple languages & runtimes (each often with its own set of dependencies), and multiple distribution channels, the speed at which we could check in a feature or produce a release has slowed, and a great deal of time is spent on codebase maintenance, security updates, and troubleshooting build issues. To provide a productive (and enjoyable) engineering system going forward, dramatic simplification was needed.
The solution
Around the end of 2022 we set about prototyping some ideas, which grew into the new QDK we are releasing in preview today. The basic philosophy behind engineering the new QDK is as follows:

Write mostly in Rust
By writing as much as possible in Rust, we have a codebase that can easily target native binaries for any platform supported by the Rust compiler (which we build into our Python wheels) and build for WebAssembly (via wasm-bindgen) to run in the browser. With a focused codebase, the resulting binaries are very small & fast too.

Keep technologies and dependencies to a minimum
There is a cost to every dependency you take: the cost to learn it, the cost to install it (i.e., build times and disk space), the cost to update & maintain it (i.e., as security issues are reported), the cost to the final product size, and so on. Sometimes these costs are worth paying for what you get in return, but the taxes accumulate over time. We are very mindful and minimal in the dependencies we take.

For our new codebase, we have limited the languages used to:
- Rust for the core of the product. This has the "batteries included" benefit of cargo to manage dependencies, builds, testing, etc.
- Python, as we build & ship packages to PyPI as part of the QDK and use Python for scripting tasks in the repo where practical.
- JavaScript (including TypeScript), as we build a VS Code extension and write some web integration code.
For those three languages, we keep dependencies to a minimum, nearly all of which can be seen in the Cargo.toml and package.json files at the root of the repo.

The high-level diagram below shows roughly how this all fits together in our VS Code extension, Python packages, and general web site integration.

Simple & fast engineering
Setting up a build environment for developers (or CI agents) should be fast. For the new codebase, you currently just install Rust, Python, and Node.js, clone one repo, and run one Python build script.

Developing the product should be fast. When working on the core compiler Rust code, the development inner loop is often as fast as clicking "run" on a unit test in VS Code via the excellent "rust-analyzer" extension. When working on the TypeScript code for the VS Code extension, with "esbuild" running in watch mode it's as quick as saving the changes and pressing F5 to launch the Extension Development Host.

The build infrastructure should be easy to keep working. Our CI and build pipelines use the same "build.py" script in the root of the repo that developers use locally to build & test.

Keep the product focused
Last but certainly not least is to avoid the extraneous. Every feature added should have a clear need and add significant value. This makes for a more streamlined & intuitive product for the customer, and a less complex codebase to do further development in.

The result
We're pretty proud of the result. It's no exaggeration to say the new Azure Quantum Development Kit is 100x smaller, 100x faster, available on Windows, Mac, Linux, and the web, and offers a greatly simplified user experience.

Size
As outlined above, the existing QDK results in over 180MB of binaries locally once a project is fully built and all dependencies are installed. The VSIX package for our new VS Code extension is currently around 700KB and includes everything needed for Q# development in VS Code. (If you "pip install" our Python packages to work with Q# via Python, that's around another 1.3MB.) Installation typically takes a couple of seconds, with no other dependencies.
If you have VS Code (and Python/Jupyter if desired), you're ready to install.

Speed
We have examples of programs that would take minutes to compile in the existing QDK. Those same programs now compile in milliseconds in the new QDK. The language service is so fast that most operations run on every keystroke and feel instant. The simulator can run thousands of "shots" per second for many common algorithms on a good laptop.

The build pipelines for the existing QDK take between 2 and 3 hours to complete, are fragile, and issues often require coordinated check-ins across multiple repos. For the new QDK, all code is in one repo, and we build, test, and push live to our online playground in around 10 minutes on every commit to main. Our publishing pipeline uses largely the same script.

We've built an extremely fast & reliable installation, language service, compiler, and debugger. Oh, and it all works inside the browser too!

VS Code for the Web
A couple of years ago VS Code introduced VS Code for the Web (https://code.visualstudio.com/docs/editor/vscode-web), with the ability to run the IDE in a browser with no local install, such as at https://vscode.dev or by pressing "." when in a GitHub repo. By building our extension entirely as a web extension, ALL our features run equally well in VS Code desktop or in the browser.

By way of example, the screenshot below shows loading the editor in the browser by visiting https://vscode.dev, running a Q# file under the debugger, viewing the quantum simulator output in the Debug Console, while also signed in to an Azure Quantum Workspace shown in the Explorer sidebar (to which the current program could be submitted), all without anything needing to be installed on the local machine. We think the improvements in the user experience for the new QDK really are a quantum leap (bad pun intended).

What's ahead
This is an early preview, and we still have several features to add before we get to our "stable" release, some of the main ones being:
- Multi-file support: For this preview, all code for a Q# program needs to be in one source file. (With Q#, you can simply concatenate source files together if need be.)
- Richer QIR support: This preview can currently compile programs for hardware that supports the QIR base profile which, as the name suggests, provides a basic level of capabilities. With some hardware starting to support more advanced capabilities (currently being specified in the QIR Adaptive Profile), we will be adding support for that also. (Note that running in the simulator isn't restricted to these profiles and can run any Q# code.)
- Migration: As the new QDK is not entirely backwards compatible with the existing QDK, we also have a lot of work to do updating samples & documentation. (The "Differences from the previous QDK" page on our wiki will highlight changes and how to migrate code.)
Once the core product is solid, we have a laundry list of further features and Q# language improvements we want to get to, which you can view and contribute to on our GitHub repo.

The existing QDK (https://learn.microsoft.com/en-us/azure/quantum/install-overview-qdk) is still fully supported and should be used if the new QDK Preview doesn't meet your needs or is changing too frequently as we iterate towards our stable release.

Get involved!
We'd love for you to give it a try and give us your feedback. The installation guide and other getting-started documentation are currently on our GitHub wiki at https://github.com/microsoft/qsharp/wiki/Installation.
You can report any issues, weigh in on feature requests, or contribute code on that same GitHub repo.
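If you want something to paste in right away, the following is a minimal, hypothetical single-file Q# program (written for this post, not taken from the QDK samples) of the kind the new QDK runs in its built-in simulator, whether in VS Code on the desktop or in the browser: save it as a .qs file and run or debug it from the editor.

```qsharp
namespace HelloQdk {
    open Microsoft.Quantum.Intrinsic;

    // Prepare a Bell pair, measure both qubits, and return the two results.
    @EntryPoint()
    operation Main() : (Result, Result) {
        use (q1, q2) = (Qubit(), Qubit());
        H(q1);
        CNOT(q1, q2);
        let results = (M(q1), M(q2));
        Reset(q1);
        Reset(q2);
        return results;
    }
}
```

Across repeated shots the two results should always agree, coming back as either (Zero, Zero) or (One, One), which is a quick way to confirm that the extension and simulator are wired up correctly.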
