Over the last decade, cloud computing has become both the bedrock upon which most modern tech is built and a potential security hazard as more and more sensitive applications and data are moved to the cloud. One group particularly interested in fixing potential security problems in the cloud is the US Defense Advanced Research Projects Agency (DARPA). For the past five years, DARPA’s Mission-oriented Resilient Clouds (MRC) program has worked to research and develop methods for increasing the security and reliability of the cloud.
For the uninitiated, cloud computing refers to the practice of using a distributed network of computers to perform tasks such as hosting websites, running calculations, and processing financial transactions. What we refer to as “the cloud” is actually a network of millions of specialized computers housed together in buildings known as server farms. Groups can purchase a certain amount of storage space or computing power from the owners of these server farms. It is important to note, however, that users of the cloud are not given the use of a specific machine, as is the case with a traditional remote server. Rather, applications and processes are frequently run across multiple machines. DARPA is concerned about the security risks of moving more government applications and networks, particularly those of the Department of Defense, onto cloud-based systems. The agency argues that the diversity of applications running on the cloud, the homogeneity of the machines running them in server farms, and the high degree of interconnectivity of cloud networks compared to traditional networks have the potential to increase the danger of extremely debilitating cyberattacks (1). Such setups make it possible for attackers to breach a poorly secured application and then propagate an attack throughout the cloud at extremely high speed (1). DARPA’s response has been the MRC program, which funds research to increase the security of the cloud, and a number of research groups have conducted research and built software with the program’s support.
Two groups, at Cornell and Johns Hopkins University, have created several pieces of software with DARPA funding that seek to increase the security of the cloud. The first system, Vsync (previously known as Isis2), was developed by Cornell researcher Ken Birman as an all-purpose tool for building cloud applications (2). One particularly noteworthy feature of Vsync is that it was built to move and copy large amounts of data between machines securely (2), which could improve security by preventing hackers from corrupting data en route from one machine to another. By contrast, ShadowDB, proposed by another group of Cornell researchers, seeks to ensure that the contamination of a single machine on the cloud does not bring down the entire system: it runs redundant copies of a process on different machines and checks their results against each other, while also verifying the correctness of the underlying code (3). Researchers at Johns Hopkins, meanwhile, have taken a different approach with Spines and Prime, which securely transfer data between servers and use random number generators to create variants of the processes running on each machine (4). The process variation is particularly interesting: because each server runs a slightly different variant, breaking a Prime routine on one server would not let an attacker break the routines on every machine.
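The redundancy idea behind ShadowDB can be sketched in a few lines: run the same computation on several replicas and accept a result only when a majority agree, so a single compromised machine cannot corrupt the output. This is only an illustration of the general technique, not ShadowDB's actual protocol or API; all names below are my own.

```python
from collections import Counter

def replicated_compute(task, replicas):
    """Run the same task on several replicas and majority-vote the results.

    'replicas' is a list of callables standing in for independent machines.
    """
    results = [replica(task) for replica in replicas]
    winner, votes = Counter(results).most_common(1)[0]
    if votes <= len(replicas) // 2:
        raise RuntimeError("no majority: too many replicas disagree")
    return winner

# Two healthy replicas outvote one compromised replica.
healthy = lambda x: x * x
compromised = lambda x: x * x + 1  # returns corrupted results
print(replicated_compute(7, [healthy, healthy, compromised]))  # 49
```

The cost of this design is obvious: every computation is paid for multiple times, which is why such schemes are reserved for systems where a wrong answer is worse than a slow one.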
Overall, the projects supported by DARPA do have the potential to improve the security of the cloud. Introducing redundancies to ensure proper computation and creating variants of processes on servers will hopefully make life harder for any hackers trying to penetrate distributed systems. However, the effect of the MRC program will ultimately be measured by how broadly its software ends up being adopted, as well as its actual utility when exposed to the strains of real-world use. It would not be unreasonable to expect big things from research coming out of DARPA, which previously helped lay the groundwork for computer networks and graphical user interfaces.
(1) Birman, Ken. "Vsync: Consistent Data Replication for Cloud Computing." CodePlex. December 22, 2015. Accessed July 7, 2016. http://vsync.codeplex.com/.
(2) Schiper, Nicholas, Vincent Rahli, Robert Van Renesse, Mark Bickford, and Robert L. Constable. ShadowDB: A Replicated Database on a Synthesized Consensus Core. Technical paper. Department of Computer Science, Cornell University.
(3) Amir, Yair, Emily Wagner, and Amy Babay. "The Spines Messaging System." The Spines Messaging System. January 1, 2012. Accessed July 7, 2016. http://www.spines.org/.
(4) Amir, Yair, Jonathan Kirsch, and John Lane. "Prime: Byzantine Replication Under Attack." Prime: Byzantine Replication Under Attack. May 4, 2010. Accessed July 7, 2016. http://www.spines.org/.
Image: © Pumai Vittayanukorn | Dreamstime.com - <a href="https://www.dreamstime.com/stock-photo-data-protection-cloud-computing-security-concept-image43928193#res14972580">Data protection, Cloud computing security concept</a>
Of the various areas of technological innovation being explored in 2016, quantum computing has had a particularly confusing journey and threatens to force significant changes in numerous spaces. Yet for all the consternation over what the fallout of widespread adoption of quantum computers will be, the field is on a tortuously slow journey from theory to prototype to market. Compare the development of quantum computing to that of the cloud, which took off when companies like Amazon began selling the use of their data centers in the mid-2000s; by 2010, the cloud was a ubiquitous feature of the modern computing landscape. Even virtual reality has had a faster journey to market since the first Oculus prototype appeared in 2012. So what factors have been constraining the development of fully functional quantum computers? Moreover, what can quantum computers do, and how will we use them when they do arrive for consumer and business use?
First, it will be necessary to understand what quantum computing is and what it can be, as the concept can be hard to grasp even for computer scientists. In its simplest form, quantum computing refers to the practice of using phenomena from quantum physics to perform computation. In particular, quantum computers rely on the concepts of superposition and entanglement to perform computations that are impossible or extremely difficult for classical computers. Unlike classical computers, which store information and compute on arrays of distinct bits that can be either one or zero (on or off), quantum computers use arrays of quantum bits, or qubits, which can be both one and zero until observed, a phenomenon known as superposition. Additionally, when one sets up a system of qubits in a particular manner, they can become entangled, which allows for the storage of more information than would be possible with a group of classical bits of the same size (1). This combination of superposition and entanglement allows quantum computers to perform operations on a range of inputs in parallel on the same group of qubits (1). This unique variety of computing can be used for a number of applications for which classical computers are poorly suited. For one, quantum computers may be much better at simulating complex systems, such as the folding of proteins, as they can operate on all inputs before giving the result for the correct input (1). More ominously, however, quantum computing has the potential to break internet security by compromising certain commonly used encryption schemes. For example, the widely used public-key encryption scheme RSA is threatened by quantum computers because they can factor numbers orders of magnitude faster than traditional computers. This is a problem because RSA relies on the premise that large numbers are hard to factor, which ensures that attackers cannot use publicly shared keys to derive the private information needed to decrypt encrypted messages.
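Superposition and entanglement can be made concrete with a tiny classical simulation of two qubits. The sketch below (entirely illustrative; a real quantum computer does not compute this way, and the function names are my own) tracks the four amplitudes of a two-qubit state, puts the first qubit into superposition with a Hadamard gate, then entangles the pair with a CNOT gate, producing the famous Bell state in which only the outcomes 00 and 11 are possible.

```python
import math

# Amplitudes for the two-qubit basis states |00>, |01>, |10>, |11>.
state = [1.0, 0.0, 0.0, 0.0]  # start in |00>

def hadamard_q0(s):
    """Put the first qubit into superposition: |0> -> (|0>+|1>)/sqrt(2)."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[2]), h * (s[1] + s[3]),
            h * (s[0] - s[2]), h * (s[1] - s[3])]

def cnot(s):
    """Flip the second qubit when the first is |1>, entangling the pair."""
    return [s[0], s[1], s[3], s[2]]

state = cnot(hadamard_q0(state))
print(state)  # amplitudes ~0.707 on |00> and |11>, zero elsewhere
```

Measuring either qubit of this state forces the other into the matching value, and no description of one qubit alone captures the pair, which is exactly the extra bookkeeping that lets n entangled qubits carry more information than n classical bits. Note too that simulating n qubits classically requires tracking 2^n amplitudes, which hints at why quantum hardware could outrun classical machines on the right problems.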
It should be noted that while quantum computers have the potential to break encryption like RSA, the current generation of quantum computers is not big or powerful enough to pose a threat to encryption systems. The largest number known to have been factored by a quantum computer, 56,153, is still 64 to 128 times shorter, in binary digits, than the numbers used in RSA encryption keys (2). While it is a relief that quantum computers are not yet particularly large or powerful, it is also perplexing that they remain so weak compared to their classical counterparts in terms of processing power. Part of this phenomenon can be explained by the fact that quantum computers are still fairly new: the first working quantum computers were developed less than two decades ago. By comparison, it took more than 30 years for classical computers to go from prototypes like the ENIAC and MANIAC I to widely available consumer models like the TRS-80 and Apple II.
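To get a sense of the scale involved: 56,153 is so small that a classical machine can factor it almost instantly by brute-force trial division, whereas the security of RSA rests on that kind of search being hopeless for numbers with a thousand or more binary digits. A minimal sketch (the function name is my own):

```python
def trial_factor(n):
    """Find the smallest prime factor of n by brute-force trial division."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1  # n is prime

p, q = trial_factor(56153)
print(p, q, p * q)          # 233 241 56153
print((56153).bit_length())  # 16 binary digits
```

Trial division's running time grows with the square root of n, so each additional bit in the number roughly multiplies the work; at RSA key sizes even the best classical factoring algorithms become infeasible, which is precisely the gap a large quantum computer would close.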
While the slow growth in power and lack of widespread adoption of quantum computers may be explainable by the relative newness of the field, a few factors are likely to hamper the growth of quantum computing power in the coming years. One hurdle in the campaign to create more powerful quantum computers is the difficulty of maintaining quantum effects in larger systems. It becomes harder to keep a quantum computer’s qubits entangled and in a state of superposition as more qubits are added, as doing so increases the chance that other particles will interact with the system, forcing it into a definite state and destroying the advantages that quantum effects provide. In addition to the problem of scalability, quantum computers are also much harder to build than classical ones. A qubit can theoretically be represented by any system that can occupy two states. In practice, however, most quantum computers represent qubits using photons, flaws in the carbon lattices of diamonds, or the valence electrons of phosphorus, all of which pose problems for cheaply manufacturing qubits that can be placed on a computer chip. Quantum processors also suffer from the difficulty of moving information between them and memory. Whereas the ingoing and outgoing information of a classical processor is basically just electrical current (or the lack thereof), qubits built from magnetic fields or microscopic particles require mechanisms for detecting and setting state that are significantly more complex and expensive.
Do all of these hurdles mean that striving to build quantum computers is a fruitless pursuit? Not necessarily. While progress towards building bigger, more powerful quantum computers has been slow, it is still being made. Just last year, D-Wave Systems revealed the first quantum processor with more than 1,000 qubits (3). While a far cry from the billions of bits that can be stored by a typical computer, it represents an impressive leap compared to the 3-qubit quantum computers created in the late 90s and early 2000s. So what will the future of quantum computing look like? If the history of classical computers is any indication, one would expect governments, universities, and very specific business interests to make use of the current generation of quantum computers, which are still quite bulky and require considerable space and resources. As hardware manufacturers become better at building components and more programmers learn about and develop for quantum computers, these machines may eventually find use in the hands of average consumers and businesses. It is still uncertain how average users will access quantum computers. Since qubits require temperatures near absolute zero and high levels of stability and isolation to effectively utilize quantum effects, it may be untenable to perform quantum computations on a personal computer. Barring developments that allow quantum computers to function at room temperature in everyday conditions, it seems more likely that consumers will access quantum computers through the cloud. Rather than trying to build a quantum chip that can be integrated into personal computers, it may be easier to set up data centers with thousands of quantum computers whose use can be bought and distributed as needed. We are already seeing groups like IBM giving users access to a quantum computer by allowing them to remotely access its systems (4).
(1) Altepeter, Joseph B. "A Tale of Two Qubits: How Quantum Computers Work." Ars Technica. Conde Nast, 18 Jan. 2010. Web. 20 July 2016.
(2) "The Mathematical Trick That Helped Smash The Record For The Largest Number Ever Factorised By A..." Medium. A Medium Corporation, 02 Dec. 2014. Web. 22 July 2016.
(3) D-Wave Systems. "D-Wave Systems Breaks the 1000 Qubit Quantum Computing Barrier." Dwavesys. D-Wave Systems, 22 June 2015. Web. 21 July 2016.
(4) IBM. IBM Makes Quantum Computing Available on IBM Cloud to Accelerate Innovation. IBM.com. International Business Machines Corporation, 4 May 2016. Web. 21 July 2016.
Image: © Welcomia | Dreamstime.com - <a href="https://www.dreamstime.com/royalty-free-stock-photos-nano-technology-image29230388#res14972580">Nano Technology</a>
Over the past five years, smartphone makers suing each other has become nearly as booming a business as making smartphones. Nearly every major maker and seller of smartphones has been involved in a suit alleging infringement of numerous patents covering various elements of the phones’ design. However, amidst the discussions of products being shut out of markets and billion-dollar settlements, it can be hard to understand what the actual technologies and concepts being fought over are. Hence, looking at the allegations of patent infringement in one lawsuit between Apple and Samsung may prove helpful in understanding the smartphone “patent wars” and the wider discussion on intellectual property.
The first part of Apple’s lawsuit against Samsung covered the alleged infringement of US Patent No. 5,946,647 (owned by Apple), which lays claim to “a computer-based system for detecting structures in data and performing actions on detected structures (1).” Apple claimed its patent was infringed upon by the ability of Samsung’s built-in browser and messenger applications to perform actions on unique pieces of data, such as dates, phone numbers, and email addresses, embedded within data read by those applications (2). The court ruled that Samsung was not in violation of Apple’s patent, though not because the patent was ruled invalid, but because the program that analyzed data for the previously mentioned unique data structures ran locally on Samsung smartphones rather than on a separate “analyzer server,” which was listed as part of the system claimed by patent 5,946,647 (2). The second patent that Apple claimed Samsung’s devices infringed upon (US Patent No. 8,074,172) lays claim to “a method, system, and graphical user interface for providing word recommendations,” colloquially known as autocorrect or autocomplete (3). The third patent laid claim to a system for unlocking a touch device “via gestures performed on the touch-sensitive display (4).” Both of these patents were deemed invalid due to the existence of similar prior systems and the broad nature of the claims (2).
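The core idea of the '647 patent, spotting recognizable structures in free text so that actions can be attached to them, is easy to illustrate with ordinary pattern matching. The sketch below is purely illustrative (the names and the toy regular expressions are my own; real data detectors are far more sophisticated and, as the ruling turned on, the patent's system involved a separate analyzer server).

```python
import re

# Toy patterns standing in for the patent's "structure detection."
DETECTORS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\(\d{3}\) \d{3}-\d{4}"),
    "date":  re.compile(r"\d{1,2}/\d{1,2}/\d{4}"),
}

def detect_structures(text):
    """Scan text for recognizable structures such as dates and phone numbers.

    Each hit could then be linked to an action, e.g. 'phone' -> place a call.
    """
    hits = []
    for kind, pattern in DETECTORS.items():
        for match in pattern.finditer(text):
            hits.append((kind, match.group()))
    return hits

msg = "Call (555) 123-4567 or email jane@example.com by 7/28/2016."
print(detect_structures(msg))
```

That a few lines of standard pattern matching capture the gist of the claim is itself a window into the dispute: the patent covers a broad concept rather than a specific implementation, which is why the litigation hinged on architectural details like where the analysis runs.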
Looking at how the courts have ruled on Apple’s claims of patent infringement, one may ask what the purpose of these types of software patents is. Unlike patents on physical goods like machines or pharmaceuticals, which typically cover specific designs or chemicals, software patents like the ones used in Apple v. Samsung lay claim to broad concepts. Moreover, unlike physical goods, which have costs associated with production and transportation, software makers incur virtually no costs in distributing their programs. As such, one may be able to justify patents on specific implementations of a software concept (i.e., the actual code in the software) on the grounds that such patents help protect the investment made to implement that concept, just as patents on physical inventions protect the investment made to produce them. However, many of the patents used in the slew of smartphone lawsuits seem intended to protect smartphone makers’ market share rather than their investments.
(1) Miller, James R., Thomas Bonura, Bonnie Nardi, and David Wright. System and Method for Performing an Action on a Structure in Computer-generated Data. US Patent 5,946,647, filed February 1, 1996, and issued August 31, 1999.
(2) Apple Inc. v. Samsung Electronics Co., LTD. (United States District Court for the Northern District of California February 26, 2016) (United States Court of Appeals for the Federal Circuit, Dist. file).
(3) Kocienda, Kenneth, and Bas Ording. Method, system, and graphical user interface for providing word recommendations. US Patent 8,074,172, filed January 5, 2007, and issued December 6, 2011.
(4) Chaudhri, Imran, Bas Ording, Freddy Allen Anzures, Marcel Van Os, Stephen O. Lemay, Scott Forstall, and Greg Christie. Unlocking a device by performing gestures on an unlock image. US Patent 8,046,721, filed June 2, 2009, and issued October 25, 2011.
(5) Santorelli, Michael J. "What Price Victory? Apple, Samsung, and the Legacy of the Smartphone Patent War - Morning Consult." Morning Consult. Morning Consult, 20 July 2015. Web. 28 June 2016.
Image: © Kheng Ho Toh | Dreamstime.com - <a href="https://www.dreamstime.com/stock-photography-global-copyright-image13539952#res14972580">Global Copyright</a>