A little more than a week ago, the Israel Innovation Center was opened in the Peres Center for Peace in Jaffa. Many of Israel’s most famous citizens were in attendance, including Prime Minister Benjamin Netanyahu and President Reuven Rivlin, alongside the Center for Peace’s founder, former Prime Minister Shimon Peres (1). The center was established to “showcase [technological] achievements of the past” while also promoting peace through innovation in Israel and abroad (2). The opening of the center provides a good opportunity to consider where Israel stands in the contemporary tech and science landscape and what the center can do to bolster Israel’s position as an innovator in the coming years.
Israel is something of an oddity in the world of science and technology. It is a country with roughly the population of Virginia, situated in a particularly unstable region of the globe. While it is certainly a wealthy country, its per capita GDP is actually much lower than that of comparable countries like Ireland or Austria. Yet despite all of this, Israel has made a plethora of contributions in fields ranging from cryptography to medicine to physics. Israeli citizens helped to create the RSA encryption system, the first USB flash drives, and Azilect, a drug used to treat early-stage Parkinson's disease. Israel also has one of the best-educated populations in the world, with roughly 50% of its citizens having attained some level of tertiary education. However, several factors make business and research harder to start in Israel than in other states. The first is the country's location, both geographic and political. Israel's small size and population can make it hard for businesses and apps started there to reach the critical mass of customers needed for long-term market viability. This problem is compounded by a lack of nearby markets to expand into, as most of Israel's neighbors are comparatively poor, unstable, and not particularly friendly toward Israel or its businesses. By contrast, small EU countries like Denmark can easily sell and expand into large neighboring markets like Germany and France thanks to the freedoms provided by the Union's single market. Many of the problems that affect Israeli business also affect Israeli research by reducing opportunities for academic collaboration.
So what can the Israel Innovation Center do to fix these problems? The most important thing it can do to spur innovation may be to provide a conduit between Israel and the rest of the world. If the center can serve as a meeting point for innovators from Israel and elsewhere, the cost of expanding and collaborating outside of Israel could fall. Beyond that, simply having an entity that serves as a dedicated advertiser for the successes of Israeli innovators could help attract more opportunities for future Israeli projects.
(1) Tepper, Fitz. "Israel Is Opening an Innovation Center to Showcase Israeli Technology and Inspire Young Entrepreneurs." TechCrunch. TechCrunch, 22 July 2016. Web. 28 July 2016.
(2) "Launch of the Israeli Innovation Center." Peres Center for Peace. The Peres Center for Peace, 21 July 2016. Web. 29 July 2016.
Image: © Inna Felker | Dreamstime.com - <a href="https://www.dreamstime.com/editorial-photo-modern-futuristic-architecture-building-herzliya-israel-august-urban-landscape-image60495686#res14972580">Modern futuristic architecture building, Herzliya, Israel.</a>
Over the last decade, cloud computing has become both the bedrock upon which most modern tech is built and a potential security hazard as more and more sensitive applications and data are moved to the cloud. One group particularly interested in trying to fix potential security problems on the cloud is the US Defense Advanced Research Projects Agency (DARPA). In the last 5 years, DARPA’s Mission-oriented Resilient Clouds (MRC) program has been working to research and develop methods to increase the security and reliability of the cloud.
For the uninitiated, cloud computing refers to the practice of using a distributed network of computers to perform various types of computation, such as hosting websites, running calculations, and processing financial transactions. What we refer to as "the cloud" is actually a network of millions of specialized computers housed together in buildings known as server farms. Groups can then purchase a certain amount of storage space or computing power from the owners of these server farms. It is important to note, however, that users of the cloud are not given the use of a specific machine, as is the case with a traditional remote server. Rather, applications and processes are frequently run across multiple different machines. DARPA is concerned about the security risks of moving more and more government, and particularly Department of Defense, applications and networks onto cloud-based systems. The agency argues that the diversity of applications running on the cloud, the homogeneity of the machines running them in server farms, and the high degree of interconnectivity of cloud networks compared to traditional networks together increase the danger of extremely debilitating cyberattacks (1). Such setups make it possible for attackers to breach a poorly secured application and then propagate an attack throughout the cloud at extremely high speed (1). DARPA's response has been the MRC program, which funds research to increase the security of the cloud. A number of research groups have conducted research and created software with the program's funding.
Groups at Cornell and Johns Hopkins University have created several pieces of software with DARPA funding that seek to increase the security of the cloud. The first system, Vsync (previously known as Isis2), developed by Cornell researcher Ken Birman, is intended to be an all-purpose tool for building cloud applications (2). One particularly noteworthy feature of Vsync is that it was built to move and copy large amounts of data between machines securely (2), which could prevent hackers from corrupting data en route from one machine to another. By contrast, the ShadowDB system proposed by another group of Cornell researchers seeks to ensure that the contamination of a single machine does not bring down the entire system: it runs redundant copies of a process on different machines and checks their results against one another, while also verifying the correctness of the code itself (3). Researchers at Johns Hopkins, meanwhile, have taken a different approach with Spines and Prime, which securely transfer data between servers and use random number generators to create variants of the processes running on each machine (4). The process variation is particularly interesting, as it means that breaking a Prime routine on one server would not enable an attacker to break the routines on all machines, since each runs slightly differently.
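The replicate-and-vote idea behind ShadowDB can be sketched in a few lines of Python. This is a hypothetical illustration of the general technique, not ShadowDB's actual code: the same task runs on several simulated replicas, and a majority vote masks a single corrupted result.

```python
def majority_vote(results):
    """Return the value reported by most replicas, treating disagreement
    as evidence that a minority of machines may be compromised."""
    counts = {}
    for r in results:
        counts[r] = counts.get(r, 0) + 1
    winner, votes = max(counts.items(), key=lambda kv: kv[1])
    if votes <= len(results) // 2:
        raise RuntimeError("no majority: too many replicas disagree")
    return winner

def replicated_run(task, replicas=3, tamper_one=False):
    """Run the same task on several (simulated) replicas and vote."""
    results = [task() for _ in range(replicas)]
    if tamper_one:
        # Simulate one corrupted machine returning a wrong answer.
        results[0] = results[0] + 1
    return majority_vote(results)

# A correct computation survives the corruption of a single replica.
print(replicated_run(lambda: 2 + 2, tamper_one=True))  # 4
```

A real system would run the replicas on physically separate machines and use a fault-tolerant agreement protocol rather than a local vote, but the masking principle is the same.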
Overall, the projects supported by DARPA do have the potential to improve the security of the cloud. Introducing redundancy to ensure proper computation and creating variants of processes on different servers should make life harder for any hackers trying to penetrate distributed systems. However, the effect of the MRC program will ultimately be measured by how broadly its software is adopted, as well as its actual utility when exposed to the strains of real-world use. It would not be unreasonable to expect big things from research coming out of DARPA, which previously helped lay the groundwork for computer networks and graphical user interfaces.
(1) Birman, Ken. "Vsync: Consistent Data Replication for Cloud Computing." CodePlex. December 22, 2015. Accessed July 7, 2016. http://vsync.codeplex.com/.
(2) Schiper, Nicholas, Vincent Rahli, Robert Van Renesse, Mark Bickford, and Robert L. Constable. ShadowDB: A Replicated Database on a Synthesized Consensus Core. Technical paper. Department of Computer Science, Cornell University.
(3) Amir, Yair, Emily Wagner, and Amy Babay. "The Spines Messaging System." The Spines Messaging System. January 1, 2012. Accessed July 7, 2016. http://www.spines.org/.
(4) Amir, Yair, Jonathan Kirsch, and John Lane. "Prime: Byzantine Replication Under Attack." Prime: Byzantine Replication Under Attack. May 4, 2010. Accessed July 7, 2016. http://www.spines.org/.
Image: © Pumai Vittayanukorn | Dreamstime.com - <a href="https://www.dreamstime.com/stock-photo-data-protection-cloud-computing-security-concept-image43928193#res14972580">Data protection, Cloud computing security concept</a>
Of the various areas of technological innovation being explored in 2016, quantum computing has had a particularly confusing journey, and it threatens to force significant changes in numerous spaces. Yet for all the consternation over what the fallout of widespread adoption of quantum computers will be, the field has made a tortuously slow journey from theory to prototype to market. Compare the development of quantum computing to that of the cloud, which started around the time companies like Amazon began selling the use of their data centers in the mid-2000s; by 2010, the cloud was a ubiquitous feature of the modern computing landscape. Even virtual reality has had a faster journey to market since the first Oculus prototype in 2012. So what factors have been constraining the development of fully functional quantum computers? Moreover, what can quantum computers do, and how will we use them when they do arrive for consumer and business use?
First, it is necessary to understand what quantum computing is, as the concept can be hard to grasp even for computer scientists. In its simplest form, quantum computing refers to the practice of using phenomena from quantum physics to perform computation. In particular, quantum computers rely on the concepts of superposition and entanglement to perform computations that are impossible or extremely difficult for classical computers. Unlike classical computers, which store information and compute on arrays of distinct bits that are either one or zero (on or off), quantum computers use arrays of quantum bits, or qubits, which can be both one and zero until observed, a phenomenon known as superposition. Additionally, when a system of qubits is set up in a particular manner, the qubits can become entangled, which allows for the storage of more information than would be possible with a group of classical bits of the same size (1). This combination of superposition and entanglement allows quantum computers to perform operations on a range of inputs in parallel on the same group of qubits (1). This unique variety of computing can be used for a number of applications for which classical computers are poorly suited. For one, quantum computers may be much better at simulating complex systems like the folding of proteins, as they can operate on all inputs before giving the result for the correct input (1). More ominously, however, quantum computing has the potential to break internet security by compromising certain commonly used encryption schemes. For example, the widely used public-key encryption scheme RSA is threatened because quantum computers can factor numbers orders of magnitude faster than traditional computers. This is a problem because RSA relies on the premise that large numbers are hard to factor, which ensures that attackers cannot use publicly shared keys to derive the private information needed to decrypt messages.
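To see concretely why factoring speed matters, consider a toy RSA keypair built from deliberately tiny primes (real keys use 1024- to 2048-bit moduli). Anyone who can factor the public modulus can reconstruct the private key; the numbers and code below are purely illustrative:

```python
# Toy RSA with textbook-sized primes; real keys use 1024-2048 bit moduli.
p, q = 61, 53
n = p * q            # public modulus
phi = (p - 1) * (q - 1)
e = 17               # public exponent, coprime to phi
d = pow(e, -1, phi)  # private exponent; requires knowing phi, i.e. p and q

msg = 42
cipher = pow(msg, e, n)          # anyone can encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg  # only the private-key holder can decrypt

def trial_factor(n):
    """Naive classical factoring: infeasible for real RSA moduli."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    return n, 1

# An attacker who factors n recovers p and q, hence phi and d:
p2, q2 = trial_factor(n)
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(cipher, d2, n))  # 42: factoring n breaks the scheme
```

For a 2048-bit modulus the trial division above would take longer than the age of the universe, which is exactly the asymmetry a large quantum computer running a fast factoring algorithm would erase.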
It should be noted that while quantum computers have the potential to break encryption schemes like RSA, the current generation of quantum computers is not big or powerful enough to pose a threat to them. The largest number known to have been factored by a quantum computer, 56153, is a 16-bit number, making it 64 to 128 times shorter than the 1024- to 2048-bit moduli used in RSA encryption keys (2). While it is a relief that quantum computers are not yet particularly large or powerful, it is also perplexing that they remain so weak compared to their classical counterparts in terms of processing power. Part of this can be explained by the fact that quantum computers are still fairly new: the first working machines were developed less than two decades ago. By comparison, it took more than 30 years for classical computers to go from prototypes like the ENIAC and MANIAC I to widely available consumer models like the TRS-80 and Apple II.
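For a sense of scale, the record number yields instantly to naive classical trial division:

```python
# 56153 factors in microseconds on a classical machine, which illustrates
# how far quantum factoring records remain from RSA-scale moduli.
n = 56153
f = 3
while n % f:          # n is odd, so only odd candidates need checking
    f += 2
print(f, n // f)      # 233 241
print(n.bit_length()) # 16 bits, vs. 1024-2048 bits for RSA moduli
```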
While the slow growth in power and lack of widespread adoption of quantum computers may be explained by the relative newness of the field, a few factors are likely to hamper the growth of quantum computing power in the coming years. One hurdle in the campaign to create more powerful quantum computers is the difficulty of maintaining quantum effects in larger systems. It becomes harder to keep a quantum computer's qubits entangled and in superposition as more qubits are added, as doing so increases the chance that outside particles will interact with the system, forcing it into a definite state and erasing the advantages that quantum effects provide. In addition to the problem of scalability, quantum computers are also much harder to build than classical ones. A qubit can theoretically be represented by any system that can occupy two states. In practice, however, most quantum computers represent qubits using photons, flaws in the carbon lattices of diamonds, or the valence electrons of phosphorus atoms, all of which pose problems for cheaply manufacturing qubits that can be placed on a computer chip. Quantum processors also suffer from the fact that it is harder to move information between them and memory. In a classical processor, ingoing and outgoing information is essentially just electrical current (or the lack thereof); for qubits encoded in magnetic fields or microscopic particles, the mechanisms for detecting and setting state are significantly more complex and expensive.
Do all of these hurdles mean that striving to build quantum computers is a fruitless pursuit? Not necessarily. While progress toward building bigger, more powerful quantum computers has been slow, it is still being made. Just last year, D-Wave Systems revealed the first quantum processor with more than 1,000 qubits (3). While a far cry from the billions of bits a typical computer can store, it represents an impressive leap from the 3-qubit quantum computers created in the late 90s and early 2000s. So what will the future of quantum computing look like? If the history of classical computers is any indication, one would expect governments, universities, and very specific business interests to make use of the current generation of quantum computers, which are still quite bulky and require considerable space and resources. As hardware manufacturers become better at building components and more programmers learn to develop for quantum computers, these machines may eventually find use in the hands of average consumers and businesses. It is still uncertain how average users will access them. Since qubits require temperatures near absolute zero and high levels of stability and isolation to effectively exploit quantum effects, it may be untenable to perform quantum computations on a personal computer. Barring developments that allow quantum computers to function at room temperature in everyday conditions, it seems more likely that consumers will reach quantum computers through the cloud. Rather than trying to build a quantum chip that can be integrated into personal computers, it may be easier to set up data centers with thousands of quantum computers whose use can be bought and allocated as needed. We are already seeing groups like IBM give users access to a quantum computer by allowing them to remotely access its systems (4).
(1) Altepeter, Joseph B. "A Tale of Two Qubits: How Quantum Computers Work." Ars Technica. Conde Nast, 18 Jan. 2010. Web. 20 July 2016.
(2) "The Mathematical Trick That Helped Smash The Record For The Largest Number Ever Factorised By A..." Medium. A Medium Corporation, 02 Dec. 2014. Web. 22 July 2016.
(3) D-Wave Systems. "D-Wave Systems Breaks the 1000 Qubit Quantum Computing Barrier." Dwavesys.com. D-Wave Systems, 22 June 2015. Web. 21 July 2016.
(4) IBM. IBM Makes Quantum Computing Available on IBM Cloud to Accelerate Innovation. IBM.com. International Business Machines Corporation, 4 May 2016. Web. 21 July 2016.
Image: © Welcomia | Dreamstime.com - <a href="https://www.dreamstime.com/royalty-free-stock-photos-nano-technology-image29230388#res14972580">Nano Technology</a>
Over the past 5 years, smartphone makers suing each other has become nearly as booming a business as making smartphones. Nearly every major maker and seller of smartphones has been involved in a suit alleging infringement of numerous patents covering various elements of the phones' design. However, amidst the discussions of products being shut out of markets and billion-dollar settlements, it can be hard to understand what the actual technologies and concepts being fought over are. Hence, looking at the allegations of patent infringement in one lawsuit between Apple and Samsung may prove helpful in understanding the smartphone "patent wars" and the wider discussion of intellectual property.
The first part of Apple’s lawsuit against Samsung covered the alleged infringement of US Patent No. 5,946,647 (owned by Apple), which lays claim to “a computer-based system for detecting structures in data and performing actions on detected structures” (1). Apple claimed the patent was infringed by the ability of Samsung’s built-in browser and messenger applications to perform actions on distinctive pieces of data, such as dates, phone numbers, and email addresses, embedded within the data those applications read (2). The court ruled that Samsung was not in violation, not because Apple’s patent was ruled invalid, but because the program that analyzed data for these structures ran locally on Samsung smartphones rather than on a separate “analyzer server,” which is listed as part of the system claimed by the ’647 patent (2). The second patent Apple asserted (US Patent No. 8,074,172) lays claim to “a method, system, and graphical user interface for providing word recommendations,” colloquially known as autocorrect or autocomplete (3). The third patent claimed a system for unlocking a touch device “via gestures performed on the touch-sensitive display” (4). Both of these patents were deemed invalid due to the existence of similar prior systems and the broad nature of their claims (2).
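To make the ’647 patent’s claim concrete, a hypothetical “data detector” of the kind it describes, which scans text for structures such as dates, phone numbers, and email addresses and pairs each with an action, might be sketched as follows. The patterns and actions here are illustrative assumptions, not code from either company:

```python
import re

# Illustrative patterns for a few kinds of "structures" in text.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "phone": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

# Each detected structure is linked to an action the system could offer.
ACTIONS = {
    "email": "compose message",
    "phone": "dial number",
    "date":  "create calendar event",
}

def detect_structures(text):
    """Return (kind, matched text, suggested action) for each structure found."""
    found = []
    for kind, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            found.append((kind, m.group(), ACTIONS[kind]))
    return found

sample = "Meet on 7/28/2016, call 555-867-5309, or email ada@example.com."
for kind, value, action in detect_structures(sample):
    print(f"{kind}: {value} -> {action}")
```

The court's distinction turned on where such analysis runs: in Samsung's phones it ran locally, like the function above, rather than on the separate "analyzer server" that the patent's system claims.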
Looking at how the courts have ruled on Apple’s claims, one may ask what the purpose of these types of software patents is. Unlike patents on physical goods like machines or pharmaceuticals, which typically cover specific designs or chemicals, software patents like the ones at issue in Apple v. Samsung lay claim to broad concepts. Moreover, unlike makers of physical goods, who bear costs of production and transportation, software makers incur virtually no cost in distributing their programs. As such, one may be able to justify patents on specific implementations of a software concept (i.e., the actual code) on the grounds that such patents protect the investment made to implement the concept, just as patents on physical inventions protect the investment required to produce them. However, many of the patents wielded in the slew of smartphone lawsuits seem intended to protect smartphone makers’ market share rather than their investments (5).
(1) Miller, James R., Thomas Bonura, Bonnie Nardi, and David Wright. System and Method for Performing an Action on a Structure in Computer-Generated Data. US Patent 5,946,647, filed February 1, 1996, and issued August 31, 1999.
(2) Apple Inc. v. Samsung Electronics Co., LTD. (United States District Court for the Northern District of California February 26, 2016) (United States Court of Appeals for the Federal Circuit, Dist. file).
(3) Kocienda, Kenneth, and Bas Ording. Method, System, and Graphical User Interface for Providing Word Recommendations. US Patent 8,074,172, filed January 5, 2007, and issued December 6, 2011.
(4) Chaudhri, Imran, Bas Ording, Freddy Allen Anzures, Marcel Van Os, Stephen O. Lemay, Scott Forstall, and Greg Christie. Unlocking a Device by Performing Gestures on an Unlock Image. US Patent 8,046,721, filed June 2, 2009, and issued October 25, 2011.
(5) Santorelli, Michael J. "What Price Victory? Apple, Samsung, and the Legacy of the Smartphone Patent War - Morning Consult." Morning Consult. Morning Consult, 20 July 2015. Web. 28 June 2016.
Image: © Kheng Ho Toh | Dreamstime.com - <a href="https://www.dreamstime.com/stock-photography-global-copyright-image13539952#res14972580">Global Copyright</a>
In recent years, there has been a trend of hackers associated with foreign governments, particularly those of Russia and China, breaching the networks of US governmental bodies and stealing troves of data. In particular, the recent acquisition of sensitive documents from the Democratic National Committee’s (DNC) network, supposedly by Russian government hackers, raises questions about the techniques and motivations behind cyberwarfare against the US government.
There are a number of different methods by which hackers can attack systems. The most common avenue for breaching a network is to acquire the credentials of a user who already has access to it; the 2014-2015 breach of the Office of Personnel Management (OPM) was conducted using a credential stolen from a government contractor (1). Hackers can also access secure networks through backdoors, avenues of access to servers not intended for use by the network’s regular users (2). Generally, systems running older software are at greater risk, as hackers and programmers have had more time to study them and find flaws. This may be one reason why the government has become a target in recent years: many government systems, including the ones involved in the OPM hack, are older than those of private-sector firms (1).
Considering the kinds of data stolen from governmental bodies, one might ask what reason a hacker, affiliated with another government or not, would have to compromise or steal such information. The most obvious reason is profit. Any sufficiently large network will typically hold information like addresses, Social Security numbers, or bank data, which can be sold to interested parties. Even passwords can fetch a good price, since many users reuse them across services. Beyond this, information grabbed from government servers may also have political value: as seen in the breach of the DNC, documents on political donors and confidential strategy reports can be embarrassing for governments or political parties (3). Finally, such information can be used to blackmail people into providing further information (4).
Regardless of how hackers breach government systems or what their intentions are once inside, the question remains of who the hackers actually are. Frequently they are claimed to be agents of foreign governments, particularly those of China and Russia, accusations those governments consistently deny. It is worth noting, though, that tracking and identifying hackers is a very difficult task. Typically, the only hard evidence of a hacker’s location is the IP address of the machine that breached the system, which investigators can try to trace to a physical location (5). However, hackers can and frequently do route attacks through proxy computers in order to hide their actual location (5). Cybersecurity experts may therefore instead examine the tools used by the hackers, looking for characteristics such as variable-naming habits or the suspected type of keyboard used to write the malicious code, in order to link an attack to a specific group or location (6). Unfortunately, this approach runs into its own set of problems, as hackers can simply write code that makes them less distinguishable and harder to identify.
(1) Sternstein, Aliya, and Jack Moore. "Timeline: What We Know About the OPM Breach (UPDATED)." Nextgov. June 26, 2015. Accessed June 21, 2016. http://www.nextgov.com/cybersecurity/2015/06/timeline-what-we-know-about-opm-breach/115603/
(2) Johansson, Jesper. "Anatomy Of A Hack: How A Criminal Might Infiltrate Your Network." Anatomy Of A Hack: How A Criminal Might Infiltrate Your Network. 2008. Accessed June 21, 2016. https://technet.microsoft.com/en-us/magazine/2005.01.anatomyofahack.aspx.
(3) Koebler, Jason. "'Guccifer 2.0' Claims Responsibility for DNC Hack, Releases Docs to Prove It." Motherboard. June 15, 2016. Accessed June 21, 2016. http://motherboard.vice.com/read/guccifer-20-claims-responsibility-for-dnc-hack-releases-documents.
(4) Bennett, Brian, and Richard A. Serrano. "Chinese Hackers Sought Information to Blackmail U.S. Government Workers, Officials Believe." Los Angeles Times. June 5, 2015. Accessed June 23, 2016. http://www.latimes.com/nation/la-na-government-cyberattack-20150605-story.html.
(5) Greenemeier, Larry. "Seeking Address: Why Cyber Attacks Are So Difficult to Trace Back to Hackers." Scientific American. June 11, 2011. Accessed June 22, 2016. http://www.scientificamerican.com/article/tracking-cyber-hackers/.
(6) Glance, David. "How We Trace the Hackers behind a Cyber Attack." How We Trace the Hackers behind a Cyber Attack. December 4, 2015. Accessed June 23, 2016. http://phys.org/news/2015-12-hackers-cyber.html.
Image: © Alexandr Blinov | Dreamstime.com - <a href="https://www.dreamstime.com/editorial-image-steel-handcuffs-credit-card-rolls-russian-rubles-samara-russia-january-dollars-lying-computer-keyboard-image64857285#res14972580">Steel handcuffs, credit card and rolls of russian rubles</a>