Researcher builds system to protect against malicious insiders – ComputerWorld

Published in From the WWW on October 19th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Algorithms to spot attacks coming from inside the network get Army support.

Credit: Thinkstock. By Sharon Gaudin, Computerworld.

When an employee turns on his own company, the results — damaged networks, data theft and even work stoppage — could be devastating. It could rock the company even more than an outside attack because the insider knows where sensitive data is kept, what the passwords are and exactly how to hurt the company the most.

That’s the driving force behind the work that Daphne Yao, associate professor of computer science at Virginia Tech, is doing on cybersecurity. Yao, who received an NSF CAREER award for her human-behavior-inspired malware detection work, is developing algorithms that will alert companies when an employee might be acting maliciously on their network.

Read the full article/reproduced from ComputerWorld

And the Army Research Office has awarded her $150,000 to continue her research into finding new ways to detect anomalies caused by system compromises and malicious insiders.

“The challenge is to understand the intention of the user and what the user is trying to do,” Yao said. “Most are doing legitimate work and they’re working their own project and minding their own business. You need a detection system that can guess what the user is trying to do.”

The crux of Yao’s work is to figure out which employees are simply downloading sensitive files or logging onto the network in the middle of the night because they’re trying to get their work done and which employees may be doing the same things because they’re trying to sell proprietary information or crash the network.

According to a 2012 Symantec report, 60% of companies said they had experienced attacks on their systems to steal proprietary information. The most frequent perpetrators were current or former employees or partners in trusted relationships.

In 1996, for instance, a network administrator at Omega Engineering Inc. planted a software time bomb that eradicated all the programs that ran the company’s manufacturing operations at its Bridgeport, N.J., plant.

The trusted IT administrator, Tim Lloyd, effectively stopped the manufacturing company from being able to manufacture, costing it $12 million in damages and its footing in the high-tech instrument and measurement market. Eighty workers lost their jobs as a result.

Lloyd was tried and convicted of computer sabotage in federal court.

More recently, in 2013 Edward Snowden leaked classified documents about global surveillance programs that he acquired while working as an NSA contractor.

The same year, Pfc. Bradley Manning, an Army intelligence analyst, was sentenced to 35 years for leaking the largest cache of classified documents in U.S. history.

These are the kinds of insider attacks Yao is working to stop.

The Army Research Office did not respond to a request for comment, but Dan Olds, an analyst with The Gabriel Consulting Group, said he’s not surprised that the military is supporting research into detecting insider threats.

“The U.S. military is very concerned about security these days,” added Olds. “The Bradley Manning leaks highlighted the massive damage that even a lowly Pfc can wreak if given access to a poorly secured IT infrastructure. The Snowden and Manning leaks have had a very severe impact on U.S. intelligence activities, disclosing not only the information gathered, but also showing the sources and methods used to get US intelligence data.”

He also said insider-based attacks may not normally get as much media attention as most hacks, but they can potentially cause much greater damage, since the attacker at least knows where the keys to the castle are hidden. And if that attacker works in IT, he or she might even have the keys.

“Insider threats are many times the most devastating, as they are the least expected,” said Patrick Moorhead, an analyst with Moor Insights & Strategy. “Companies spend most of their security time and money guarding against external threats…. So that sometimes leaves the inside exposed.”

To combat this, Yao is combining big data, analytics and security to design algorithms that focus on linking human activities with network actions.
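The article does not spell out Yao’s algorithms, but the core idea of linking a user’s network actions to that user’s own history can be illustrated with a toy anomaly score. Everything below — the event fields, the baseline statistics and the scoring — is a hypothetical sketch for illustration, not the detection system described in the research.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical event record: (user, hour_of_day, bytes_downloaded).
# Idea: score each new session against the user's own historical baseline.

def build_baselines(history):
    """history: list of (user, hour, bytes) tuples from past sessions."""
    per_user = defaultdict(list)
    for user, hour, nbytes in history:
        per_user[user].append((hour, nbytes))
    baselines = {}
    for user, sessions in per_user.items():
        volumes = [b for _, b in sessions]
        baselines[user] = {
            "mean_bytes": mean(volumes),
            "std_bytes": pstdev(volumes) or 1.0,  # avoid divide-by-zero
            "usual_hours": {h for h, _ in sessions},
        }
    return baselines

def anomaly_score(event, baselines):
    """Higher score = further from the user's own normal behaviour."""
    user, hour, nbytes = event
    base = baselines.get(user)
    if base is None:
        return float("inf")  # unknown user: maximally suspicious
    volume_z = abs(nbytes - base["mean_bytes"]) / base["std_bytes"]
    odd_hour = 0.0 if hour in base["usual_hours"] else 2.0
    return volume_z + odd_hour

history = [("alice", 10, 5_000_000), ("alice", 11, 7_000_000), ("alice", 14, 6_000_000)]
baselines = build_baselines(history)
print(anomaly_score(("alice", 3, 900_000_000), baselines))  # large, off-hours download
```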

Continue reading the full article/reproduced from ComputerWorld

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Brown Dog digs into the deep, dark web – GCN

Published in From the WWW on October 19th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Led by Kenton McHenry and Jong Lee of the Image and Spatial Data Analysis division at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, Brown Dog seeks to develop a service that will make uncurated data accessible.

Unstructured data is the bane of researchers everywhere. Although casual Googlers may be frustrated by not being able to open online files, researchers often need to dig into data trapped in outdated formats and uncurated collections with little or no metadata. And according to IDC, up to 90 percent of big data is “dark,” meaning the contents of such files cannot be easily accessed.

Hence Brown Dog, a solution to this long-tail problem. Read the full article/reproduced from GCN


“The information age has made it easy for anyone to create and share vast amounts of digital data, including unstructured collections of images, video and audio as well as documents and spreadsheets,” said McHenry. “But the ability to search and use the contents of digital data has become exponentially more difficult.”

Brown Dog is working to change that. The UI team, which received a $10 million, five-year award from the National Science Foundation in 2013, recently demonstrated two services to make the contents of uncurated data collections accessible.

The first, called Data Access Proxy (DAP), transforms unreadable files into readable ones by linking together a series of computing and translational operations behind the scenes.
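The article does not describe how DAP chains those operations, but conceptually, “linking together a series of computing and translational operations” can be pictured as finding a path through a graph of available converters. The sketch below illustrates that idea with a breadth-first search over made-up format names and converters; it is not Brown Dog’s actual implementation.

```python
from collections import deque

# Hypothetical registry of converters: (from_format, to_format).
# In reality these would be conversion tools wrapped behind the proxy.
CONVERTERS = [
    ("wordperfect", "doc"),
    ("doc", "docx"),
    ("docx", "pdf"),
    ("pdf", "txt"),
]

def conversion_path(src, dst):
    """Breadth-first search for a chain of converters from src to dst."""
    graph = {}
    for a, b in CONVERTERS:
        graph.setdefault(a, []).append(b)
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain of converters exists

print(conversion_path("wordperfect", "txt"))
# ['wordperfect', 'doc', 'docx', 'pdf', 'txt']
```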

Continue reading the full article/reproduced from GCN

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Gartner lays out its top 10 tech trends for 2015 – ComputerWorld

Published in From the WWW on October 12th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Credit: Nemo via Pixabay / Thinkstock. By Patrick Thibodeau. Computerworld | Oct 7, 2014 12:45 PM PT

Here’s the Gartner list for 2015, reproduced from ComputerWorld

1: Computing Everywhere. To Gartner, this simply means ubiquitous access to computing capabilities. Intelligent screens and connected devices will proliferate, and will take many forms, sizes and interaction styles.

2: The Internet of Things (IoT). IT managers should experiment, get ideas going and empower individuals in IT organizations to develop uses for connected devices and sensors.

3: 3D printing. The technology has been around since 1984, but is now maturing and shipments are on the rise. While consumer 3D printing gets a lot of attention, it’s really the enterprise use that can deliver value.

4: Advanced, Pervasive and Invisible Analytics. Every application is an analytical app today.

5: Context Rich Systems. Knowing the user, the location, what they have done in the past, their preferences, social connections and other attributes all become inputs into applications.

6: Smart Machines. An example is global mining company Rio Tinto, which operates autonomous trucks, showing the role smart machines will play.

7: Cloud and Client Computing. This highlights the central role of the cloud. An application will reside in a cloud, and it will be able to span multiple clients.

8: Software Defined Applications and Infrastructure. IT can’t work on hard coded, pre-defined elements; it needs to be able to dynamically assemble infrastructure.

9: Web-Scale IT. This is akin to adopting some of the models used by large cloud providers, including their risk-embracing culture and collaborative alignments.

10: Security. In particular, Gartner envisions more attention to application self-protection.

Here’s the Gartner list for 2015, reproduced from ComputerWorld

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

The Big Data Disruption – HortonWorks

Published in From the WWW on October 9th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Apache Hadoop didn’t disrupt the datacenter, the data did.

The explosion of new types of data in recent years – from inputs such as the web and connected devices, or just sheer volumes of records – has put tremendous pressure on the enterprise data warehouse (EDW).

Social Media Data: Win customers’ hearts: With Hadoop, you can mine Twitter, Facebook and other social media conversations for sentiment data about you and your competition, and use it to make targeted, real-time decisions that increase market share. More »

Server Log Data: Fortify security and compliance: Security breaches happen. And when they do, your server logs may be your best line of defense. Hadoop takes server-log analysis to the next level by speeding and improving security forensics and providing a low-cost platform to show compliance (a minimal log-analysis sketch follows this list). More »

Web Clickstream Data: Show them the way: How do you move customers on to bigger things—like submitting a form or completing a purchase? Get more granular with customer segmentation. Hadoop makes it easier to analyze, visualize and ultimately change how visitors behave on your website. More »

Machine and Sensor Data: Gain insight from your equipment: Your machines know things. From out in the field to the assembly line floor—machines stream low-cost, always-on data. Hadoop makes it easier for you to store and refine that data and identify meaningful patterns, providing you with the insight to make proactive business decisions. More »

Geolocation Data: Profit from predictive analytics: Where is everyone? Geolocation data is plentiful, and that’s part of the challenge. The costs to store and process voluminous amounts of data often outweigh the benefits. Hadoop helps reduce data storage costs while providing value-driven intelligence from asset tracking to predicting behavior to enable optimization.
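As a concrete illustration of the server-log use case above, here is a minimal sketch of what such a job could look like with PySpark on a Hadoop cluster. The HDFS path, the log line format, and the “failed logins per source IP” metric are all assumptions made for the example, not part of the Hortonworks material.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("failed-login-count").getOrCreate()

# Assumed sshd-style lines, e.g. "... Failed password for root from 10.0.0.5 port 22 ssh2"
lines = spark.sparkContext.textFile("hdfs:///logs/auth/*.log")

def source_ip(line):
    # Assumed format: the source IP follows the token "from".
    fields = line.split()
    return fields[fields.index("from") + 1] if "from" in fields else "unknown"

failed_by_ip = (
    lines.filter(lambda line: "Failed password" in line)
         .map(lambda line: (source_ip(line), 1))
         .reduceByKey(lambda a, b: a + b)
)

# Print the ten noisiest source addresses.
for ip, count in failed_by_ip.takeOrdered(10, key=lambda kv: -kv[1]):
    print(ip, count)

spark.stop()
```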

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Distributed, ‘artificial’ intelligence and machine perception – CARACaS – IEEE Spectrum

Published in From the WWW, Snippet on October 5th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Image: U.S. Navy.

Read the original/Reproduced from IEEE Spectrum: By Jeremy Hsu

A fleet of U.S. Navy boats approached an enemy vessel like sharks circling their prey. The scene might not seem so remarkable compared to any of the Navy’s usual patrol activities, but in this case, part of an exercise conducted by the U.S. Office of Naval Research (ONR), the boats operated without any direct human control: they acted as a robot boat swarm.

The tests on Virginia’s James River this past summer represented the first large-scale military demonstration of a swarm of autonomous boats designed to overwhelm enemies. This capability points to a future where the U.S. Navy and other militaries may deploy underwater, surface, and flying robotic vehicles to defend themselves or attack a hostile force.

“What’s new about the James River test was having five USVs [unmanned surface vessels] operating together with no humans on board,” said Robert Brizzolara, an ONR program manager. In the test, five robot boats practiced an escort mission that involved protecting a main ship against possible attackers. To command the boats, the Navy used a system called the Control Architecture for Robotic Agent Command and Sensing (CARACaS). The system not only steered the autonomous boats but also coordinated their actions with other vehicles—a larger group of manned and remotely controlled vessels. Brizzolara said the CARACaS system evolved from hardware and software originally used in NASA’s Mars rover program starting 11 years ago. Each robot boat transmits its radar views to the others so the group shares the same situational awareness. They’re also continually computing their own paths to navigate around obstacles and act in a cooperative manner.
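The article gives only this high-level description of CARACaS. Purely as a hypothetical illustration of shared situational awareness, the sketch below fuses obstacle contacts reported by several boats into one common picture and computes a simple avoidance heading from it; the message format and the potential-field steering rule are invented for the example, not taken from the Navy system.

```python
import math

# Hypothetical contact report: each boat broadcasts (boat_id, obstacle_x, obstacle_y).
reports = [
    ("usv1", 120.0, 40.0),
    ("usv2", 121.0, 41.0),   # same obstacle seen by a second boat
    ("usv3", 300.0, -75.0),
]

def merge_contacts(reports, radius=5.0):
    """Fuse nearby reports into a shared obstacle list all boats can use."""
    merged = []
    for _, x, y in reports:
        for cx, cy in merged:
            if math.hypot(x - cx, y - cy) < radius:
                break
        else:
            merged.append((x, y))
    return merged

def avoidance_heading(pos, goal, obstacles, repulse=2000.0):
    """Toy potential-field steering: attract toward the goal, repel from obstacles."""
    vx, vy = goal[0] - pos[0], goal[1] - pos[1]
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d2 = (dx * dx + dy * dy) or 1.0
        vx += repulse * dx / d2
        vy += repulse * dy / d2
    return math.degrees(math.atan2(vy, vx))

shared_picture = merge_contacts(reports)
print(shared_picture)
print(avoidance_heading((100.0, 30.0), (500.0, 0.0), shared_picture))
```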

Navy researchers installed the system on regular 7-foot and 11-foot boats and put them through a series of exercises designed to test behaviors such as escort and swarming attack. The boats escorted a manned Navy ship before breaking off to encircle a vessel acting as a possible intruder. The five autonomous boats then formed a protective line between the intruder and the ship they were protecting.

Photo: John F. Williams/U.S. Navy. An unmanned boat operates autonomously during an Office of Naval Research demonstration of swarm boat technology on the James River in Newport News, Va.

Such robotic swarm technology could transform modern warfare for the U.S. Navy and the rest of the U.S. military by reducing the risk to human personnel. Smart robots and drones that don’t require close supervision could also act as a “force multiplier” consisting of relatively cheap and disposable forces—engaging more enemy targets and presenting more targets for enemies to worry about.

“Numbers may once again matter in warfare in a way they have not since World War II, when the U.S. and its allies overwhelmed the Axis powers through greater mass,” wrote Paul Scharre, a fellow at the Center for a New American Security, a military research institution in Washington, D.C., in an upcoming report titled “Robotics on the Battlefield Part II: The Coming Swarm.”

“Qualitative superiority will still be important, but may not be sufficient alone to guarantee victory,” Scharre wrote. “Uninhabited systems in particular have the potential to bring mass back to the fight in a significant way by enabling the development of swarms of low-cost platforms.”

The Navy does not have a firm timeline for when such robot swarms could become operational. For now, ONR researchers hope to improve the autonomous system in terms of its ability to “see” its surroundings using different sensing technologies. They also want to improve how the boats navigate autonomously around obstacles, even in the most unexpected situations that human programmers haven’t envisioned. But the decision to have such robot boats open fire upon enemy targets will still rest with human sailors.

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

New frontier in error-correcting codes – MIT News

Published in From the WWW, Snippet on October 5th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Illustration: Jose-Luis Olivares/MIT

Coding scheme for interactive communication is the first to near optimality on three classical measures.

Error-correcting codes are one of the glories of the information age: They’re what guarantee the flawless transmission of digital information over the airwaves or through copper wire, even in the presence of the corrupting influences that engineers call “noise.”

But classical error-correcting codes work best with large chunks of data: The bigger the chunk, the higher the rate at which it can be transmitted error-free. In the Internet age, however, distributed computing is becoming more and more common, with devices repeatedly exchanging small chunks of data over long periods of time.

Larry Hardesty | MIT News Office : October 2, 2014. Read the full/reproduced from MIT News

So for the last 20 years, researchers have been investigating interactive-coding schemes, which address the problem of long sequences of short exchanges. Like classical error-correcting codes, interactive codes are evaluated according to three criteria: How much noise can they tolerate? What’s the maximum transmission rate they afford? And how time-consuming are the encoding and decoding processes?

At the IEEE Symposium on Foundations of Computer Science this month, MIT graduate students past and present will describe the first interactive coding scheme to approach the optimum on all three measures.

“Previous to this work, it was known how to get two out of three of these things to be optimal,” says Mohsen Ghaffari, a graduate student in electrical engineering and computer science and one of the paper’s two co-authors. “This paper achieves all three of them.”

Vicious noise

Moreover, where Claude Shannon’s groundbreaking 1948 analysis of error-correcting codes considered the case of random noise, in which every bit of transmitted data has the same chance of being corrupted, Ghaffari and his collaborator — Bernhard Haeupler, who did his graduate work at MIT and is now an assistant professor at Carnegie Mellon University — consider the more stringent case of “adversarial noise,” in which an antagonist is trying to interfere with transmission in the most disruptive way possible.

“We don’t know what type of random noise will be the one that actually captures reality,” Ghaffari explains. “If we knew the best one, we would just use that. But generally, we don’t know. So you try to generate a coding that is as general as possible.” A coding scheme that could thwart an active adversary would also thwart any type of random noise.

Error-correcting codes — both classical and interactive — work by adding some extra information to the message to be transmitted. They might, for instance, tack on some bits that describe arithmetic relationships between the message bits. Both the message bits and the extra bits are liable to corruption, so decoding a message — extracting the true sequence of message bits from the sequence that arrives at the receiver — is usually a process of iterating back and forth between the message bits and the extra bits, trying to iron out discrepancies.
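The “extra bits that describe arithmetic relationships between the message bits” are easiest to see in a classical code rather than an interactive one. As a concrete, standard example (not the scheme in the paper), the Hamming(7,4) code below adds three parity bits to four message bits, and the receiver uses the parity checks to locate and flip a single corrupted bit.

```python
def hamming74_encode(d):
    """d: four message bits. Returns a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one flipped bit, then return the four message bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    error_pos = s1 + 2 * s2 + 4 * s3  # 0 means no single-bit error detected
    if error_pos:
        c[error_pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

msg = [1, 0, 1, 1]
codeword = hamming74_encode(msg)
codeword[5] ^= 1                      # simulate noise flipping one bit
assert hamming74_decode(codeword) == msg
print("recovered:", hamming74_decode(codeword))
```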

In interactive communication, the maximum tolerable error rate is one-fourth: If the adversary can corrupt more than a quarter of the bits sent, perfectly reliable communication is impossible. Some prior interactive-coding schemes, Ghaffari explains, could handle that error rate without requiring too many extra bits. But the decoding process was prohibitively complex.

Making a list

To keep the complexity down, Ghaffari and Haeupler adopted a technique called list decoding. Rather than iterating back and forth between message bits and extra bits until the single most probable interpretation emerges, their algorithm iterates just long enough to create a list of likely candidates. At the end of their mutual computation, each of the interacting devices may have a list with hundreds of entries.

But each device, while it has only imperfect knowledge of the messages sent by the other, has perfect knowledge of the messages it sent. So if, at the computation’s end, the devices simply exchange lists, each has enough additional information to zero in on the optimal decoding.
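A toy version of that list-exchange step, much simplified relative to the actual scheme: Alice knows exactly which bits she sent, so she can discard any candidate transcript on Bob’s list that contradicts them. The transcript encoding and candidate lists below are invented purely for illustration.

```python
# A transcript here is a tuple of (sender, bit) rounds.
true_transcript = (("A", 1), ("B", 0), ("A", 0), ("B", 1))

# What Alice knows for certain: the bits she herself sent, by round index.
alice_sent = {i: bit for i, (who, bit) in enumerate(true_transcript) if who == "A"}

# Bob's decoder produced a short list of candidates (one correct, others noise-induced).
bob_candidates = [
    (("A", 1), ("B", 0), ("A", 1), ("B", 1)),   # disagrees with what Alice sent in round 2
    (("A", 1), ("B", 0), ("A", 0), ("B", 1)),   # correct
    (("A", 0), ("B", 1), ("A", 0), ("B", 1)),   # disagrees in round 0
]

def consistent_with_own_messages(candidate, own_sent):
    return all(candidate[i][1] == bit for i, bit in own_sent.items())

surviving = [c for c in bob_candidates if consistent_with_own_messages(c, alice_sent)]
assert surviving == [true_transcript]
print("Alice pins down the transcript:", surviving[0])
```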

The maximum tolerable error rate for an interactive-coding scheme — one-fourth — is a theoretical result. The minimum length of an encoded message and the minimum decoding complexity, on the other hand, are surmises based on observation.

But Ghaffari and Haeupler’s decoding algorithm is nearly linear, meaning that its execution time is roughly proportional to the length of the messages exchanged.

“It is optimal in the sense that it is linear,” says Mark Braverman, an assistant professor of computer science at Princeton University who has also worked on interactive coding. “That’s an important benchmark.”

But linear relationships are still defined by constants: y = x is a linear relationship, but so is y = 1,000,000,000x. A linear algorithm that takes an extra second of computation for each additional bit of data it considers isn’t as good as a linear algorithm that takes an extra microsecond.

“We still need to worry a little bit about constants,” Braverman says. “But before you can worry about constants, you have to know that there is a constant-rate scheme. This is very nice progress and a prerequisite to asking those next questions.”

. . .

Read the full/reproduced from MIT News

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Augmented Reality – CACM

Published in From the WWW, Snippet on October 3rd, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

My attention was recently drawn to a remarkable seven-minute TED video by Louie Schwartzberg [a] that reinforced for me the power of technology to adapt to the limitations of our human perceptions. With the aid of technology, often digital in nature and often involving some serious computation, we can perceive that which is too fast, too slow, too big, too small, too diverse, and too high or low (as in frequency). As Schwartzberg’s video illustrates, we can use time-lapse photography to watch processes too slow to perceive or high-speed photography to make visible that which is too fast for the human eye to see. We can downshift or upshift frequencies to make things audible that we would otherwise not detect: the low-frequency communication of elephants [b] and the high frequencies generated by bats and pest-control devices. We can shift or detect high-energy and high-frequency photons, such as X-rays, and make them visible to the human eye. We can take images in ultraviolet or infrared that our eyes cannot see but our instruments can, and thus make them visible.

By Vinton G. Cerf
Communications of the ACM, Vol. 57 No. 9, Page 7
10.1145/2656433

Reproduced/Read the original from CACM

Anyone who has watched a time-lapse film of flowers opening or mushrooms growing or vines climbing can appreciate how dramatically the time-lapse images help us appreciate and understand processes that take place so slowly that we do not see them as dynamic. I recall visiting a rain forest in Irian Jaya (the western half of the island of New Guinea) where our guide explained the long, slow battle between the trees and the climbing vines that, ultimately, throttled the trees over a period of years. I recall when my son, David, suggested a 100-year project to photograph, in time-lapse, a forest’s vivid story. It would be quite an interesting experience to watch the slow, titanic battles for control of the upper canopy and the survival and regeneration of the ground-hugging brush over the course of decades. It would be a technical challenge to ensure the equipment stayed functional, but one could use radio transmission to capture the images as long as the cameras were in operating condition. Similar tactics have been used to observe, on a continuous basis, areas not friendly to human habitation such as winters at the poles.

The rise of interest in “big data” has spurred a concurrent interest in visualization of collections of digital information, looking for patterns more easily recognized by humans than by computer algorithms. Creation of overlays of multiple data sources on Google Earth, correlated as to time and geographic location, also have served as an organized way to visualize and experience information we could not naturally observe with our human senses. Similar methods have brought visibility to the distribution of dark matter in the universe by inferring its existence and presence through its gravitational effects.

As our computational tools become more and more powerful, we can anticipate that our growing knowledge of the mechanics of our world will allow us to use simulation to visualize, understand, and even design processes that we could only crudely imagine before. The 2013 Nobel Prize for Chemistry went to Martin Karplus, Michael Levitt, and Arieh Warshel “for the development of multiscale models for complex chemical systems.” This is computational chemistry at its best and it shows how far we have come with tools that depend upon significant computing power available in this second decade of the 21st century. Indeed, we hear, more and more, of computational physics, biology, linguistics, exegesis, and comparative literature as fields well outside the traditional numerical analysis and programming disciplines typically associated with computer science. Computation has become an infrastructure for the pursuit of research in a growing number of fields of science and technology, including sociology, economics, and behavioral studies.

Reproduced/Read the original from CACM

One can only speculate what further accumulation of digitized data, computational power, storage, and models will bring in the future. The vast troves of data coming from the Large Hadron Collider, the Hubble and future James Webb telescopes (among others), and the NSF National Ecological Observatory Network (NEON) program [c] will be the sources for visualization, correlation, and analysis in the years ahead. Whoever thinks computer science is boring has not been paying attention!

Author

Vinton G. Cerf is vice president and Chief Internet Evangelist at Google. He served as ACM president from 2012–2014.

Footnotes

a. https://www.youtube.com/watch?v=FiZqn6fV-4Y

b. https://www.youtube.com/watch?v=YfHO6bM6V8k

c. http://www.nsf.gov/funding/pgm_summ.jsp and http://www.neoninc.org/

Copyright held by author. The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Let’s go round again: Google unleashes new price cuts for Compute Engine – CCN

Published in From the WWW, Snippet on October 3rd, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

By James Bourne 02 October 2014, 12:12 p.m. Picture credit: Robert Scoble/Flickr

Google has announced the latest price drop on its Compute Engine cloud infrastructure, keeping in line with its Moore’s Law theory of cloud pricing.

“As predicted by Moore’s Law, we can now lower prices again”, wrote Urs Hölzle, Google SVP technical infrastructure in a company blog post. “Effective immediately, we are cutting prices of Google Compute Engine by approximately 10% for all instance types in every region.

“These cuts are a result of increased efficiency in our data centres as well as falling hardware costs, allowing us to pass on lower prices to our customers,” he added.

Read the original/reproduced from CloudComputingNews

The latest price drops are likely to set off similar attacks from the competition, namely Amazon Web Services (AWS) and Microsoft Azure. Back in March, Google set things off by slashing Compute Engine prices by a third and Cloud Storage prices by two thirds, with Hölzle saying at the time that you needed a PhD to work out the best option.

Amazon and Microsoft swiftly followed suit. At AWS Summits in San Francisco, SVP Andy Jassy noted that lowering prices was not new for Amazon, although admitting previous price drops came largely in the absence of competitive pressure.

Low prices don’t mean anything unless you’ve got customer support, mind you, and Google also took the time to focus on a couple of case studies. The Snapchat case study got wheeled out again – the search giant is fond of telling anyone who would listen about that – while two more interesting customers got mentions. Google Compute Engine powers Workiva, which processes financial reports for 60% of the Fortune 500 saving nearly $1m annually, and Coca Cola, running the Happiness Flag campaign for the World Cup.

Last month Google announced a scheme for $100,000 (£61,500) in Cloud Platform credits to eligible startups, again seemingly taking a shot across the bows of AWS and its Portfolio Package initiative.

Read the original/reproduced from CloudComputingNews

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Putting the squeeze on quantum information – Canadian Institute for Advanced Research

Published in From the WWW, Snippet on September 29th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Quantum Computing

CIFAR researchers have shown that information stored in quantum bits can be exponentially compressed without losing information. The achievement is an important proof of principle, and could be useful for efficient quantum communications and information storage. Compression is vital for modern digital communication. It helps movies to stream quickly over the Internet, music to fit into digital players, and millions of telephone calls to bounce off of satellites and through fibre optic cables.

But it has not been clear if information stored in quantum bits, or qubits, could likewise be compressed. A new paper from Aephraim M. Steinberg (University of Toronto), a senior fellow in CIFAR’s program in Quantum Information Science, shows that quantum information stored in a collection of identically prepared qubits can be perfectly compressed into exponentially fewer qubits.

Read the original/reproduced from EurekAlert!

Digital compression in the world of classical information theory is fairly straightforward. As a simple example, if you have a string of 1,000 zeros and ones and are only interested in how many zeros there are, you can simply count them and then write down the number.
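The saving in that classical example is easy to quantify: a count between 0 and 1,000 fits in about ten bits, versus the thousand bits of the original string.

```latex
% Storing only the count of zeros in a 1,000-bit string:
\lceil \log_2(1000 + 1) \rceil = 10 \text{ bits} \;\ll\; 1000 \text{ bits}
```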

In the quantum world it’s more complicated. A qubit can be in a “superposition” between both zero and one until you measure it, at which point it collapses to either a zero or a one. Not only that, but you can extract different values depending on how you make the measurement. Measured one way, a qubit might reveal a value of either zero or one. Measured another way it might show a value of either plus or minus.

These qualities open up huge potential for subtle and powerful computing. But they also mean that you don’t want to collapse the quantum state of the qubit until you’re ready to. Once you’ve made a single measurement, any other information you might have wanted to extract from the qubit disappears.

You could just store the qubit until you know you’re ready to measure its value. But you might be dealing with thousands or millions of qubits.

“Our proposal gives you a way to hold onto a smaller quantum memory but still have the possibility of extracting as much information at a later date as if you’d held onto them all in the first place,” Steinberg says.

In the experiment, Lee Rozema, a researcher in Steinberg’s lab and lead author on the paper, prepared qubits in the form of photons which carried information in the form of their spin and in their path. The experiment showed that the information contained in three qubits could be compressed into only two qubits. The researchers also showed that the compression would scale exponentially. So it would require only 10 qubits to store all of the information about 1,000 qubits, and only 20 qubits to store all of the information about a million.
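The figures quoted above are consistent with logarithmic scaling: whatever the exact protocol overhead, storing the collective state of N identically prepared qubits in roughly log2 N qubits matches 10 qubits for a thousand and 20 for a million, since

```latex
2^{10} = 1024 \ge 1000, \qquad 2^{20} = 1\,048\,576 \ge 10^{6}.
```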

One caveat is that the information has to be contained in qubits that have been prepared by an identical process. However, many experiments in quantum information make use of just such identically prepared qubits, making the technique potentially very useful.

“This work sheds light on some of the striking differences between information in the classical and quantum worlds. It also promises to provide an exponential reduction in the amount of quantum memory needed for certain tasks,” Steinberg says.

“The idea grew out of a CIFAR meeting,” he says. “There was a talk by Robin Blume-Kohout (Sandia National Laboratory) at the Innsbruck meeting that first started me thinking about data compression, and then discussions with him led into this project.”

The paper will appear in an upcoming issue of Physical Review Letters. In addition to Rozema and Steinberg, authors include Dylan H. Mahler, CIFAR Global Scholar Alex Hayat and Peter S. Turner.

About CIFAR

CIFAR brings together extraordinary scholars and scientists from around the world to address questions of global importance. Based in Toronto, Canada, CIFAR is a global research organization comprising nearly 400 fellows, scholars and advisors from more than 100 institutions in 16 countries. The Institute helps to resolve the world’s major challenges by contributing transformative knowledge, acting as a catalyst for change, and developing a new generation of research leaders. Established in 1982, CIFAR partners with the Government of Canada, provincial governments, individuals, foundations, corporations and research institutions to extend our impact in the world.

CIFAR’s program in Quantum Information Science unites computer scientists and physicists in an effort to harness the strange and fascinating properties of the quantum world, where the mere act of observing an object changes its nature, with the aim of building quantum computers.

Read the original/reproduced from EurekAlert!

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

What will our world look like in 2022? – IEEE Computer Society

Published in From the WWW, Snippet on September 28th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

IEEE Computer Society

In 2013-14, nine technical leaders wrote a report, entitled IEEE CS 2022, surveying 23 innovative technologies that could change the industry by the year 2022. The report covers security cross-cutting issues, open intellectual property movement, sustainability, massively online open courses, quantum computing, device and nanotechnology, 3D integrated circuits, multicore, photonics, universal memory, networking and interconnectivity, software-defined networks, high-performance computing, cloud computing, the Internet of Things, natural user interfaces, 3D printing, big data and analytics, machine learning and intelligent systems, computer vision and pattern recognition, life sciences, computational biology and bioinformatics, and robotics for medical care.

Read the report from IEEE CS, also embedded at the end of this post and summarized very well at Computer.org (reproduced below – visit Computer.org for the original article)

1. Security Cross-Cutting Issues
The growth of large data repositories and emergence of data analytics have combined with intrusions by bad actors, governments, and corporations to open a Pandora’s box of issues. How can we balance security and privacy in this environment?
2. Open Intellectual Property Movement
From open source software and standards to open-access publishing, the open IP movement is upon us. What are the implications?
3. Sustainability
Can electronic cars, LED lighting, new types of batteries and chips, and increasing use of renewables combat rising energy use and an explosion in the uptake of computing?
4. Massively Online Open Courses
MOOCs have the potential to transform the higher-education landscape, siphoning students from traditional universities and altering faculty and student roles. How significant will their impact be?
5. Quantum Computing
Constrained only by the laws of physics, quantum computing will potentially extend Moore’s Law into the next decade. As commercial quantum computing comes within reach, new breakthroughs are occurring at an accelerating pace.
6. Device and Nanotechnology
It is clear that MEMS devices, nanoparticles, and their use in applications are here to stay. Nanotechnology has already been useful in manufacturing sunscreen, tires, and medical devices that can be swallowed.
7. 3D Integrated Circuits
The transition from printed circuit boards to 3D-ICs is already underway in the mobile arena, and will eventually spread across the entire spectrum of IT products.
8. Universal Memory
Universal memory replacements for DRAM will cause a tectonic shift in architectures and software.
9. Multicore
By 2022, multicore will be everywhere, from wearable systems and smartphones to cameras, games, automobiles, cloud servers, and exa-scale supercomputers.
10. Photonics
Silicon photonics will be a fundamental technology to address the bandwidth, latency, and energy challenges in the fabric of high-end systems.
11. Networking and Interconnectivity
Developments at all levels of the network stack will continue to drive research and the Internet economy.
12. Software-defined Networks
OpenFlow and SDN will make networks more secure, transparent, flexible, and functional.
13. High-performance Computing
While some governments are focused on reaching exascale, some researchers are intent on moving HPC to the cloud.
14. Cloud Computing
By 2022, cloud will be more entrenched and more computing workloads will run on the cloud.
15. The Internet of Things
From clothes that monitor our movements to smart homes and cities, the Internet of Things knows no bounds, except for our concerns about ensuring privacy amid such convenience.
16. Natural User Interfaces
The long-held dreams of computers that can interface with us through touch, gesture, and speech are finally coming true, with more radical interfaces on the horizon.
17. 3D Printing
3D printing promises a revolution in fabrication, with many opportunities to produce designs that would have been prohibitively expensive.
18. Big Data and Analytics
The growing availability of data and demand for its insights holds great potential to improve many data-driven decisions.
19. Machine Learning and Intelligent Systems
Machine learning plays an increasingly important role in our lives, whether it’s ranking search results, recommending products, or building better models of the environment.
20. Computer Vision and Pattern Recognition
Unlocking information in pictures and videos has had a major impact on consumers and more significant advances are in the pipeline.
21. Life Sciences
Technology has been pivotal in improving human and animal health and addressing threats to the environment.
22. Computational Biology and Bioinformatics
Vast amounts of data are enabling the improvement of human health and unraveling of the mysteries of life.
23. Medical Robotics
From autonomous delivery of hospital supplies to telemedicine and advanced prostheses, medical robotics has led to many life-saving innovations.

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

© all content copyright respective owners