HP’s ‘The Machine’ & the Future of Linux – FOSS Force

Published in From the WWW on December 15th, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

An array of 17 purpose-built oxygen-depleted titanium dioxide memristors built at HP Labs, imaged by an atomic force microscope.

If all goes according to plan, HP will release a new operating system they’re calling Linux++ in June 2015. Before we start jumping up and down and putting on our party hats, we should know that this is not a new Linux distro being designed by HP to be featured on a new line of laptops. Although based on Linux and Android, this won’t even be an operating system in the sense that mortals such as I generally use the term. Most of us won’t be downloading and installing it. If we do, we won’t be using it as a drop-in replacement for Mint, Fedora or any of our other favorite desktop distros.

Reproduced/read the full article from FOSS Force. By Christine Hall.

Linux++ will mainly be used by developers who want to get their software projects ready for The Machine, a completely new type of computer which HP hopes to introduce to the large-scale server market sometime in 2018. This computer will have such a radically new design that, in many ways, it’ll be a completely different animal from the machines we’ve been using since the days when the word “computer” pretty much meant “IBM mainframe.”

So what is The Machine? Julie Bort with Business Insider on Thursday called it “a computer so radical and so powerful that it will reduce today’s data center down to the size of a refrigerator.” If it lives up to its hype, it promises to make today’s computers look like horses and buggies by comparison.

HP is developing all sorts of whiz-bang technologies to make this baby work — photonics, for instance, for super high speed data transfer. At the heart of The Machine will be a new memory technology, memristors, which HP has been developing since at least 2008. Like flash memory, memristors are nonvolatile, meaning they don’t lose the data they’re holding when powered down. Unlike flash memory, however, they can handle over a million rewrites and are suitable for use as a computer’s primary memory.

According to a 2010 article published by MIT Technology Review:

“The memristor circuits…are also capable of both memory and logic, functions that are done in separate devices in today’s computers. ‘Most of the energy used for computation today is used to move the data around’ between the hard drive and the processor, says [HP’s R. Stan] Williams. A future memristor-based device that provided both functions could save a lot of energy and help computers keep getting faster, even as silicon reaches its physical limits.”

In other words, a memristor system can store all of the data that would normally be stored on a secondary memory device, such as a hard drive, making that data instantly available to the CPU. Memristors come with other advantages as well, including smaller size and a much reduced energy footprint, hence the refrigerator-sized data center concept.

Linux++ is not the operating system that will run HP’s The Machine. According to another MIT Technology Review article, it’s only an interim step and is something of an emulator to make a conventional computer behave like The Machine:

“A working prototype of The Machine should be ready by 2016, says [The Machine’s chief architect, Kirk] Bresniker. However, he wants researchers and programmers to get familiar with how it will work well before then. His team aims to complete an operating system designed for The Machine, called Linux++, in June 2015. Software that emulates the hardware design of The Machine and other tools will be released so that programmers can test their code against the new operating system. Linux++ is intended to ultimately be replaced by an operating system designed from scratch for The Machine, which HP calls Carbon.”

It’s most likely that Carbon, The Machine’s OS, will be proprietary. It’s also likely that The Machine will be an extremely expensive piece of hardware, at least at first. However, if big server users such as Google and Facebook get behind the project, as they undoubtedly will if the technology proves to be viable, the price will rapidly fall. It’s entirely possible that within a decade this technology might be powering not only servers, but desktops, laptops and mobile devices as well.

This could turn out to be one of the biggest game changers the computing world has ever seen — bigger than the advent of the personal computer or the creation of smartphones and tablets.

Even if HP fails to get The Machine to fly, something like this is certain to be successfully developed elsewhere, and when that happens it will definitely prove to be a conundrum for free software. For starters, it’s doubtful that any programs will work on new and radically different architectures without extensive modifications. Although we can expect well-funded server projects like Apache, Docker, Hadoop and OpenStack to be ready if and when The Machine, or something like it, makes its debut, some smaller consumer-oriented open source projects might not.

Right now, we don’t even know whether there will be a place for Linux when this brave new world arrives. We can assume that if HP’s bid is successful, however, Linux will be ready to go because HP’s already working on a Linux implementation, which Red Hat can take and run with.

Time will tell.

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Cloud of Clouds (Intercloud) – OpenMind

Published in From the WWW on December 14th, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Intercloud, or ’cloud of clouds’, is a term that refers to a theoretical model for cloud computing services based on the idea of combining many different individual clouds into one seamless mass in terms of on-demand operations. The intercloud would simply make sure that a cloud could use resources beyond its reach by taking advantage of pre-existing contracts with other cloud providers.

The Intercloud scenario is based on the key concept that no single cloud has infinite physical resources or a ubiquitous geographic footprint. If a cloud saturates the computational and storage resources of its infrastructure, or is requested to use resources in a geography where it has no footprint, it would still be able to satisfy such requests for service allocations sent from its clients.

Read the full article/reproduced from OpenMind. By Ahmed Banafa

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Algorithms / Architectures etc.? Contact me – I can help – this is one of my primary areas of expertise.

The Intercloud scenario would address such situations: each cloud would use the computational, storage, or other resources (through semantic resource descriptions and open federation) of the infrastructures of other clouds. This is analogous to the way the Internet works, in that a service provider to which an endpoint is attached will access or deliver traffic from/to source/destination addresses outside of its service area by using Internet routing protocols with other service providers with whom it has a pre-arranged exchange or peering relationship. It is also analogous to the way mobile operators implement roaming and inter-carrier interoperability. Such forms of cloud exchange, peering, or roaming may introduce new business opportunities among cloud providers if they manage to go beyond the theoretical framework.
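
To make the federation idea concrete, here is a minimal Python sketch of a cloud that serves requests locally when it can and otherwise delegates to a peered provider. Every class, method and provider name below is hypothetical; it illustrates the delegation pattern, not any real Intercloud API.

```python
# A toy sketch of the Intercloud delegation idea: if the home cloud cannot
# satisfy a request (capacity or geography), it falls back to peered providers
# with which it has a pre-arranged agreement. All names are hypothetical.

class Cloud:
    def __init__(self, name, regions, capacity):
        self.name = name
        self.regions = set(regions)
        self.capacity = capacity          # free allocation units, say
        self.peers = []                   # pre-arranged peering agreements

    def can_serve(self, region, size):
        return region in self.regions and self.capacity >= size

    def allocate(self, region, size):
        """Serve locally if possible, otherwise delegate to a peer."""
        if self.can_serve(region, size):
            self.capacity -= size
            return f"{self.name} serves {size} units in {region}"
        for peer in self.peers:
            if peer.can_serve(region, size):
                peer.capacity -= size
                return f"{self.name} delegates to {peer.name} in {region}"
        raise RuntimeError("no provider in the federation can satisfy the request")


home = Cloud("HomeCloud", regions={"eu-west"}, capacity=10)
partner = Cloud("PartnerCloud", regions={"ap-south", "eu-west"}, capacity=100)
home.peers.append(partner)

print(home.allocate("eu-west", 4))    # served locally
print(home.allocate("ap-south", 8))   # no footprint -> delegated to the peer
```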

IBM researchers are working on a solution that they claim can seamlessly store and move data across multiple cloud platforms in real time. The firm thinks that the technology will help enterprises with service reliability concerns. On top of this, they hope to “cloud-enable” almost any digital storage product.

Researchers at IBM have developed a “drag-and-drop” toolkit that allows users to move file storage across almost any cloud platform. The company cloud would host identity authentication and encryption technologies as well as other security systems on an external cloud platform (the ‘InterCloud Store’) to keep each cloud autonomous, while also keeping them synced together.

IBM’s Evangelos Eleftheriou explained that the cloud-of-clouds invention can help avoid service outages because it can tolerate crashes of any number of clients. It would do this using the independence of multiple clouds linked together to increase overall reliability.

Storage services don’t communicate directly with each other but instead go through the larger cloud for authentication. Data is encrypted as it leaves one station and decrypted before it reaches the next. If one cloud happens to fail, a back-up cloud responds immediately.
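
A minimal sketch of that replicate-and-fail-over pattern is shown below, assuming the third-party `cryptography` package for encryption and plain dictionaries standing in for cloud object stores. It illustrates the general idea, not IBM’s toolkit or the ‘InterCloud Store’ itself.

```python
# A minimal sketch of replicating encrypted data across several clouds and
# falling back to a backup cloud on failure. Each "provider" is just a dict
# standing in for an object store; requires `pip install cryptography`.
from cryptography.fernet import Fernet

class CloudOfClouds:
    def __init__(self, providers):
        self.providers = providers            # name -> dict acting as a store
        self.key = Fernet.generate_key()      # held by the coordinating layer
        self.fernet = Fernet(self.key)

    def put(self, object_id, data):
        ciphertext = self.fernet.encrypt(data)   # encrypted before it leaves
        for store in self.providers.values():
            store[object_id] = ciphertext        # replicated to every cloud

    def get(self, object_id):
        for store in self.providers.values():
            try:
                return self.fernet.decrypt(store[object_id])
            except KeyError:
                continue                         # that cloud is "down"; try the next
        raise KeyError(object_id)


clouds = {"cloud_a": {}, "cloud_b": {}, "cloud_c": {}}
store = CloudOfClouds(clouds)
store.put("report.txt", b"quarterly figures")
clouds["cloud_a"].clear()                        # simulate an outage of one provider
print(store.get("report.txt"))                   # still served by a backup cloud
```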

The cloud-of-clouds is also intrinsically more secure: “If one provider gets hacked there is little chance they will penetrate other systems at the same time using the same vulnerability,” says Alessandro Sorniotti, a cloud storage scientist at IBM and one of the researchers. “From the client perspective, we will have the most available and secure storage system.”

HP and Red Hat have also made offerings of a similar kind, Cisco will invest $1B over the next two years to build its expanded cloud business, and incremental capabilities can be expected to push the true investment figure even higher.

The Future?

Five years from now there will be a suite of international interoperability standards that will lead to a cloud of clouds, or “inter-cloud,” a future in which there will be tight integration between multiple clouds. This tighter integration of clouds will have practical implications for businesses, giving analysts the ability to sift through silos of big data applications to make better-informed decisions, according to John Messina, a senior member of the National Institute of Standards and Technology’s cloud computing program. Interoperability is much broader than organizations or consumers talking with cloud providers; it also involves cloud providers communicating with one another and interconnecting with other resources such as social media and sensor networks, Messina said.

NIST, along with other international groups such as the Institute of Electrical and Electronics Engineers, the International Electrotechnical Commission, the International Organization for Standardization and the TM Forum, is pushing for interoperability and portability standards. “I think there is a safe prediction that we will have much more interoperability in the future right around the three- to five-year point. Probably closer to five, we will have that cloud of clouds people are talking about,” Messina said.

Randy Garrett, program manager with the Defense Advanced Research Projects Agency’s Information Innovation Office, who also was on the panel, said, “We will see a growth in the Internet of Things,” referring to devices ranging from smart phones to automated sensors and non-computing devices connected to the Internet.

An interconnected world has potential benefits, but it also creates new risks. For example, 10 years ago there was no danger that somebody could remotely take over your car with a cyberattack. But a car today, with onboard computers, a GPS receiver and wireless connections, is vulnerable. Someone can take over a car. They cannot steer it (unless we are talking about Google’s driverless car), Garrett noted, but they can do other things. “So when you take that possibility and spread it out, it makes you wonder what type of future world we will have if somebody can come in remotely and change your heating or air conditioning or shut down your car.”

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Algorithms / Architectures etc.? Contact me – I can help – this is one of my primary areas of expertise.

Still, a lot of future benefits will arise as a result of connected devices and access to more information such as the better tracking of the rise and spread of epidemics, a larger sampling of medicines or the ability to detect manufacturing defects, Garrett said.

Read the full article/reproduced from OpenMind. By Ahmed Banafa


Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Quantum Code-Cracking: An Interview with Thomas Vidick – Caltech

Published in From the WWW on December 14th, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Thomas Vidick, Assistant Professor of Computing and Mathematical Sciences. Credit: Lance Hayashida/Caltech Marketing and Communications

Quantum computers, looked to as the next generation of computing technology, are expected to one day vastly outperform conventional computers. Using the laws of quantum mechanics—the physics that governs the behavior of matter and light at the atomic and subatomic scales—these computers will allow us to move and analyze enormous amounts of information with unprecedented speed. Although engineers have yet to actually build such a machine, Assistant Professor of Computing and Mathematical Sciences Thomas Vidick is figuring out how some of the principles of quantum computing can be applied right now, using today’s technology.

Originally from Belgium, Vidick received his BS from École Normale Supérieure in Paris in 2007 and his master’s degree from Université Paris Diderot, also in 2007. He earned his doctorate from UC Berkeley in 2011. Vidick joined the Division of Engineering and Applied Science at Caltech in June from MIT, where he was a postdoctoral associate.

This fall, he spoke with us about quantum methods for encrypting information, what he’s looking forward to at Caltech, and his ongoing search for the best croissants in Los Angeles.

Written by Jessica Stoller-Conrad. Read the original/reproduced from Caltech

What are your research interests?

My area is quantum computing, so it’s the computer science of quantum physics. Classical computers—like the computer on my desk—work based on the laws of classical mechanics. They just manipulate bits and do various operations. However, in the 1970s people started to wonder what kinds of computational processes could be realized using quantum-mechanical systems. They ended up discovering algorithms that in some cases can be more efficient, or that can implement certain tasks that were not possible with classical computers.

In my research, I look at two things. One, what are the kinds of procedures that you can implement more efficiently using quantum computers? And, two, what kinds of cryptographic systems—ways to encrypt information securely—can you come up with using quantum systems that could be more secure than classical systems? It’s all about this exploration of what quantum systems allow us to do that classical systems didn’t or wouldn’t.

Quantum computers haven’t been invented yet, so how do you do this work?

That’s a good question, and there are several different answers. Some of my research is very theoretical, and it’s just about saying, “If we had a quantum computer, what could we do with it?” We don’t have a quantum computer yet because it’s very hard to manipulate and control quantum systems on a large scale. But that is just an engineering problem, so what people say is that yes it’s very hard, but in 10 years, we’ll get to it. And the theory is also very hard, so we might as well get started right now.

That’s one answer. But the better answer is that a lot of what I do and a lot of what I’m interested in doesn’t require or depend on whether we can actually build a quantum computer or not. For instance, the cryptographic aspects of quantum computing are already being implemented. There are start-ups that already sell quantum cryptographic systems on the Internet because these systems only require the manipulation of very-small-scale quantum systems.

We can also do some computations about properties of quantum-mechanical systems on a classical computer. One branch of my research has to do with how you can come up with classical algorithms for computing the properties of systems that are described by the laws of quantum mechanics. The most natural way to understand the systems would be to have a quantum computer and then use the quantum computer to simulate the evolution of the quantum-mechanical system. Since we don’t have a quantum computer, we have to develop these algorithms using a classical computer and our understanding of the quantum-mechanical system.

Can you give a real-world example of how this work might affect the ways in which an average person uses a computer in the future?

One of the most basic ways that quantum cryptographic tasks are used is to come up with a secret key or passcode to encrypt communication. For instance, the two of us, we trust one another, but we’re far away from each other. We want to come up with a secret key—just some sort of passcode that we’re going to use to encrypt our communication later. I could dream up the passcode and then tell it to you over the phone, but if someone listens to the line, it’s not secure. There might constantly be someone listening in on the line, so there is no passcode procedure to exchange secret keys between us, unless we meet up in person.

However, it is known that if we are able to send quantum messages, then actually we could do it. How this works is that, instead of sending you a passcode of my choice, I would send you a bunch of photons, which are quantum particles, prepared in a completely random state. There is then a whole quantum protocol in which you need to measure the photons, but the main point is that at the end, we’ll each be able to extract the exact same passcode: me from the way the photons were prepared, and you from the measurement results. The code will be random, but we’ll both know it.

And because of the laws of quantum mechanics, if anyone has been listening on the line—intercepting the photons—we’ll be able to tell. The reason for this is that any disturbance of the photon’s quantum-mechanical states can be detected from the correlations between the outcomes of the measurements and the initial state of the photons. This is called an “entropy-disturbance tradeoff”—if the eavesdropper perturbs the photons then the outcome distribution you observe is affected in a way that can be checked. This is a uniquely quantum phenomenon, and it allows distant parties to establish a secret key or a passcode between them in a perfectly secure way.
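
The protocol Vidick describes resembles the BB84 scheme from the 1980s. The toy, entirely classical simulation below only illustrates the bookkeeping: random bits sent in random bases, positions kept where the bases match, and errors appearing once an intercept-and-resend eavesdropper is added. It is not a faithful quantum simulation and offers no real security.

```python
# A toy, purely classical simulation of a BB84-style key exchange. Wrong-basis
# measurements are modeled as coin flips, which is enough to show that an
# eavesdropper introduces detectable errors in the sifted key.
import random

def bb84_round(n_photons=32, eavesdropper=False):
    alice_bits  = [random.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [random.choice("+x") for _ in range(n_photons)]
    bob_bases   = [random.choice("+x") for _ in range(n_photons)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdropper and random.choice("+x") != a_basis:
            bit = random.randint(0, 1)     # Eve's wrong-basis measurement randomizes the bit
        # Bob's own wrong-basis measurement also gives a random outcome
        bob_bits.append(bit if a_basis == b_basis else random.randint(0, 1))

    # keep only positions where the bases matched (these are announced publicly)
    alice_key = [b for b, a, c in zip(alice_bits, alice_bases, bob_bases) if a == c]
    bob_key   = [b for b, a, c in zip(bob_bits, alice_bases, bob_bases) if a == c]
    errors = sum(x != y for x, y in zip(alice_key, bob_key))
    return alice_key, bob_key, errors

_, _, quiet = bb84_round(eavesdropper=False)
_, _, noisy = bb84_round(eavesdropper=True)
print("errors without eavesdropper:", quiet)   # 0 in this idealized model
print("errors with eavesdropper:   ", noisy)   # typically > 0, revealing the attack
```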

How does your work address this?

This system of sending quantum messages was discovered in the ’80s, and, as I said before, people are already implementing it. But there is one big drawback to quantum cryptography, and that’s that you need quantum equipment to do it—and this quantum equipment tends to be really clunky. It’s very hard to come up with a machine that sends photons one by one, and since single photons can be easily lost, it’s also hard to make accurate measurements. Also, you need a machine that can generate single photons and a machine that can detect single photons for the message to be secure.

In practice, we don’t have such machines. We have these huge clunky machines that can sort of do it, but they’re never perfect. My work tries to bypass the need for these machines, with cryptographic protocols and proofs of security that are secure even if you can’t make or see the quantum part of the protocol. To do this, we model the quantum equipment just as a black box. So my work has been to try to get these strong proofs of security into a model where we only really trust the interactions we can see in the classical world. It’s a proof of security that holds independently of whether the quantum part of the device works in the way that we think it does.

How did you get into this field?

I was doing math originally. I was doing number theory as an undergrad and I liked it a lot. But then I did an internship, and I realized that I couldn’t tell anyone why I was asking the questions I was asking. So I thought, “I need a break from this. Whatever I do for my life, I need to know why I’m doing it.” The best alternative I could think of was computer science, because it seemed more concrete. And this was when I learned that quantum computing existed—I didn’t know before. I think what’s most interesting about it is that you’re talking about the world—because the world is quantum mechanical. Physics describes the world.

That’s what I really like, because from my point of view everything I do is still very theoretical work and I like doing theoretical work. I like the beauty of it. I like the abstractness of it. I like that you have well-posed problems and you can give well-defined answers. But I also like the fact that in the end you are talking about or trying to talk about real-world physics. So every time I think “Why am I doing this?” or “What should I do?” I try to think of how I can connect it to a real, concrete question.

How did you get interested in math and computer science when you were a kid?

My dad was a chemist but he worked as an engineer, and he would come home from work and would bring home different experiments with liquid nitrogen or whatever.

I guess he gave me sort of a scientific mind, but then why did I do math problems? Probably like most people good at math, I was just good at it for some reason and it was just easy. Math is so beautiful when you understand it. Throughout middle school and high school, I just enjoyed it so much. But then, as I said, eventually I stretched my limits in math a little bit.

What are you excited about in terms of coming to Caltech?

I really like the Computing and Mathematical Sciences department here—it’s a young department and it’s a small department. For me it’s very unique in that there’s a very strong group in quantum information—especially the physics side of quantum information, like my neighbor here, John Preskill. Caltech has a very strong group in quantum information and also has a very strong group in computer science. And so, from the point of view of my research, this is just the perfect place.

And then there are the mountains. I love the mountains—they’re just beautiful. This is how I abandoned the smoky Paris cafes. I had to think about the mountains. You can’t beat the view from my office, and I can go hike up there.

Other than hiking, do you have any hobbies or interests that are outside your research?

I also like to bike up mountains. I did that a lot when I came here, but then I fractured my collarbone while biking. It’s almost better now, but I still haven’t gotten back on the bike yet. Another thing that is an investment of time—and I’m really worried about that one—is croissant hunting. I really like croissants and chocolates. I’m from Belgium, and Belgium is pretty big on chocolate. I’ve already been to a lot of famous croissant and chocolate places in L.A., but I haven’t found something that has lived up to my standards yet. I haven’t done everything though, so I’m open to recommendations.

Written by Jessica Stoller-Conrad. Read the original/reproduced from Caltech

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

More-flexible digital communication – MIT News

Published in From the WWW on December 14th, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.


MIT News

A new theory could yield more-reliable communication protocols. Communication protocols for digital devices are very efficient but also very brittle: They require information to be specified in a precise order with a precise number of bits. If sender and receiver — say, a computer and a printer — are off by even a single bit relative to each other, communication between them breaks down entirely.

Humans are much more flexible. Two strangers may come to a conversation with wildly differing vocabularies and frames of reference, but they will quickly assess the extent of their mutual understanding and tailor their speech accordingly.

Reproduced from/read the original at MIT News. By Larry Hardesty. December 12, 2014

Madhu Sudan, an adjunct professor of electrical engineering and computer science at MIT and a principal researcher at Microsoft Research New England, wants to bring that type of flexibility to computer communication. In a series of recent papers, he and his colleagues have begun to describe theoretical limits on the degree of imprecision that communicating computers can tolerate, with very real implications for the design of communication protocols.

“Our goal is not to understand how human communication works,” Sudan says. “Most of the work is really in trying to abstract, ‘What is the kind of problem that human communication tends to solve nicely, [and] designed communication doesn’t?’ — and let’s now see if we can come up with designed communication schemes that do the same thing.”

One thing that humans do well is gauging the minimum amount of information they need to convey in order to get a point across. Depending on the circumstances, for instance, one co-worker might ask another, “Who was that guy?”; “Who was that guy in your office?”; “Who was that guy in your office this morning?”; or “Who was that guy in your office this morning with the red tie and glasses?”

Similarly, the first topic Sudan and his colleagues began investigating is compression, or the minimum number of bits that one device would need to send another in order to convey all the information in a data file.

Uneven odds

In a paper presented in 2011, at the ACM Symposium on Innovations in Computer Science (now known as Innovations in Theoretical Computer Science, or ITCS), Sudan and colleagues at Harvard University, Microsoft, and the University of Pennsylvania considered a hypothetical case in which the devices shared an almost infinite codebook that assigned a random string of symbols — a kind of serial number — to every possible message that either might send.

Of course, such a codebook is entirely implausible, but it allowed the researchers to get a statistical handle on the problem of compression. Indeed, it’s an extension of one of the concepts that longtime MIT professor Claude Shannon used to determine the maximum capacity of a communication channel in the seminal 1948 paper that created the field of information theory.

In Sudan and his colleagues’ codebook, a vast number of messages might have associated strings that begin with the same symbol. But fewer messages will have strings that share their first two symbols, fewer still strings that share their first three symbols, and so on. In any given instance of communication, the question is how many symbols of the string one device needs to send the other in order to pick out a single associated message.

The answer to that question depends on the probability that any given interpretation of a string of symbols makes sense in context. By way of analogy, if your co-worker has had only one visitor all day, asking her, “Who was that guy in your office?” probably suffices. If she’s had a string of visitors, you may need to specify time of day and tie color.
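
A toy version of that codebook idea can be sketched in a few lines: every message gets a long random string, the sender transmits only a prefix, and the receiver decodes by picking the most probable matching message under its own prior. The messages, priors and prefix-lengthening rule below are made up for illustration; the actual papers give schemes and guarantees this toy does not attempt.

```python
# A toy illustration of the shared random codebook with mismatched priors.
import random

random.seed(0)
messages = ["guy in office", "guy in office this morning", "delivery person", "new intern"]
codebook = {m: "".join(random.choice("01") for _ in range(64)) for m in messages}

sender_prior   = {"guy in office": 0.55, "guy in office this morning": 0.25,
                  "delivery person": 0.15, "new intern": 0.05}
receiver_prior = {"guy in office": 0.45, "guy in office this morning": 0.30,
                  "delivery person": 0.15, "new intern": 0.10}

def decode(prefix, prior):
    # receiver keeps every message whose codeword starts with the prefix,
    # then picks the most probable one under its OWN prior
    candidates = [m for m in messages if codebook[m].startswith(prefix)]
    return max(candidates, key=lambda m: prior[m])

def transmit(message):
    # In the real scheme the sender adds enough slack bits to cover any prior
    # in an assumed range; this toy cheats and checks the receiver's actual
    # prior directly, lengthening the prefix until decoding succeeds.
    for length in range(1, 65):
        prefix = codebook[message][:length]
        if decode(prefix, receiver_prior) == message:
            return prefix

prefix = transmit("guy in office this morning")
print(len(prefix), "bits sent instead of a full 64-bit string")
print("receiver decodes:", decode(prefix, receiver_prior))
```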

Existing compression schemes do, in fact, exploit statistical regularities in data. But Sudan and his colleagues considered the case in which sender and receiver assign different probabilities to different interpretations. They were able to show that, so long as protocol designers can make reasonable assumptions about the ranges within which the probabilities might fall, good compression is still possible.

For instance, Sudan says, consider a telescope in deep-space orbit. The telescope’s designers might assume that 90 percent of what it sees will be blackness, and they can use that assumption to compress the image data it sends back to Earth. With existing protocols, anyone attempting to interpret the telescope’s transmissions would need to know the precise figure — 90 percent — that the compression scheme uses. But Sudan and his colleagues showed that the protocol could be designed to accommodate a range of assumptions — from, say, 85 percent to 95 percent — that might be just as reasonable as 90 percent.

Buggy codebook

In a paper being presented at the next ITCS, in January, Sudan and colleagues at Columbia University, Carnegie Mellon University, and Microsoft add even more uncertainty to their compression model. In the new paper, not only do sender and receiver have somewhat different probability estimates, but they also have slightly different codebooks. Again, the researchers were able to devise a protocol that would still provide good compression.

They also generalized their model to new contexts. For instance, Sudan says, in the era of cloud computing, data is constantly being duplicated on servers scattered across the Internet, and data-management systems need to ensure that the copies are kept up to date. One way to do that efficiently is by performing “checksums,” or adding up a bunch of bits at corresponding locations in the original and the copy and making sure the results match.

That method, however, works only if the servers know in advance which bits to add up — and if they store the files in such a way that data locations correspond perfectly. Sudan and his colleagues’ protocol could provide a way for servers using different file-management schemes to generate consistency checks on the fly.

“I shouldn’t tell you if the number of 1’s that I see in this subset is odd or even,” Sudan says. “I should send you some coarse information saying 90 percent of the bits in this set are 1’s. And you say, ‘Well, I see 89 percent,’ but that’s close to 90 percent — that’s actually a good protocol. We prove this.”
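
Here is a simplified sketch of that kind of coarse consistency check: both servers derive the same pseudo-random sample of bit positions from a shared seed and compare only the fraction of 1s in the sample, within a tolerance. It illustrates the idea Sudan describes, not the protocol from the paper.

```python
# A simplified sketch of a coarse, tolerance-based consistency check between
# an original file and a replica, using a shared seed so both sides sample
# the same positions.
import random

def ones_fraction(bits, seed, sample_size=1000):
    rng = random.Random(seed)                       # same seed -> same positions on both servers
    positions = [rng.randrange(len(bits)) for _ in range(sample_size)]
    return sum(bits[p] for p in positions) / sample_size

def roughly_consistent(frac_a, frac_b, tolerance=0.02):
    return abs(frac_a - frac_b) <= tolerance

original = [random.randint(0, 1) for _ in range(100_000)]
replica = list(original)
replica[42] ^= 1                                    # a single stale bit in the copy

fa = ones_fraction(original, seed="sync-epoch-7")
fb = ones_fraction(replica, seed="sync-epoch-7")
print(fa, fb, roughly_consistent(fa, fb))           # tiny drift is tolerated
```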

“This sequence of works puts forward a general theory of goal-oriented communication, where the focus is not on the raw data being communicated but rather on its meaning,” says Oded Goldreich, a professor of computer science at the Weizmann Institute of Science in Israel. “I consider this sequence a work of fundamental nature.”

“Following a dominant approach in 20th-century philosophy, the work associates the meaning of communication with the goal achieved by it and provides a mathematical framework for discussing all these natural notions,” he adds. “This framework is based on a general definition of the notion of a goal and leads to a problem that is complementary to the problem of reliable communication considered by Shannon, which established information theory.”

Reproduced from/read the original at MIT News. By Larry Hardesty. December 12, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Will Tomorrow’s Supercomputers Be Superconducting? – IEEE Spectrum

Published in From the WWW on December 13th, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Illustration: Randi Klett; Images: Getty Images

Today, the list of the 500 fastest supercomputers is dominated by computers based on semiconducting circuitry. Ten years from now, will superconducting computers start to take some of those slots? Last week, IARPA, the U.S. intelligence community’s high-risk research arm, announced that it had awarded its first set of research contracts in a multi-year effort to develop a superconducting computer. The program, called Cryogenic Computing Complexity (C3), is designed to develop the components needed to construct such a computer as well as a working prototype.

By Rachel Courtland. Posted 11 Dec 2014 | 22:00 GMT. Read the original/reproduced from IEEE Spectrum

If the program succeeds, it could potentially be a big boon to the makers of supercomputers. The ubiquitous CMOS-based technology we use to make those systems is proving difficult to scale up without consuming staggering amounts of power.

Superconducting circuitry, which boasts resistance-less wires and hyper-fast switches, could potentially be a faster and more efficient alternative – even when you take into account the fact that it will require cryocoolers to take the temperature down to a few degrees above absolute zero.

The idea of superconducting computing actually extends all the way back to the dawn of the computer age. One of the early candidates for digital logic was a superconducting switch called a cryotron, developed in the 1950s by engineer Dudley Buck.

This time around, the leading logic candidate is likely to be a form of single-flux quantum (SFQ) circuitry. SFQ logic is based on flow: bits stream through the circuits as voltage pulses, which are blocked or passed by superconducting devices called Josephson junctions. A bit is 0 or 1 depending on whether a pulse is present or not during a given period of time.
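
As a purely software illustration of that pulse-present-or-absent encoding, the toy model below maps bits to pulse arrival times in fixed clock windows and back again. It says nothing about real SFQ circuit behaviour; the clock period and timing values are arbitrary.

```python
# A toy software model of SFQ-style encoding: within each clock window a bit
# is 1 if a pulse arrives and 0 if it does not. Not a circuit-level simulation.

def encode(bits, period_ps=10):
    """Return pulse arrival times (in picoseconds) for the 1-bits."""
    return [i * period_ps + period_ps // 2 for i, b in enumerate(bits) if b]

def decode(pulse_times, n_bits, period_ps=10):
    """Recover bits by checking which clock windows contain a pulse."""
    windows = {t // period_ps for t in pulse_times}
    return [1 if i in windows else 0 for i in range(n_bits)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
pulses = encode(bits)
assert decode(pulses, len(bits)) == bits
print(pulses)   # [5, 25, 35, 65]
```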

I wrote about this form of logic a few years back, when a team at Northrop Grumman reported a new, lower-power incarnation of the technology. In fact, Northrop Grumman is one of the awardees, along with IBM and Raytheon-BBN, in the first phase of IARPA’s C3 project.

This first phase, according to program documents (pdf) released last year, will focus on demonstrating critical superconducting computing components. Two projects will focus on logic and two on memory, C3 program manager Marc Manheimer recently told HPCwire. In phase two, the components will be combined to create a working computer.

Read the original/reproduced from IEEE Spectrum

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Computers that teach by example – MIT News

Published in Snippet on December 7th, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Julie Shah (left) and Been Kim Photo: Jose-Luis Olivares/MIT

New system enables pattern-recognition systems to convey what they learn to humans. Computers are good at identifying patterns in huge data sets. Humans, by contrast, are good at inferring patterns from just a few examples.

In a paper appearing at the Neural Information Processing Systems (NIPS) conference next week, MIT researchers present a new system that bridges these two ways of processing information, so that humans and computers can collaborate to make better decisions. The system learns to make judgments by crunching data but distills what it learns into simple examples. In experiments, human subjects using the system were more than 20 percent better at classification tasks than those using a similar system based on existing algorithms.

Larry Hardesty | MIT News Office. December 5, 2014. Read the full article from MIT News

Need expertise with Cloud / Internet Scale computing / Hadoop / Machine Learning / Big Data / Algorithms / Architectures etc.? Contact me – I can help – this is one of my primary areas of expertise.

“In this work, we were looking at whether we could augment a machine-learning technique so that it supported people in performing recognition-primed decision-making,” says Julie Shah, an assistant professor of aeronautics and astronautics at MIT and a co-author on the new paper. “That’s the type of decision-making people do when they make tactical decisions — like in fire crews or field operations. When they’re presented with a new scenario, they don’t do search the way machines do. They try to match their current scenario with examples from their previous experience, and then they think, ‘OK, that worked in a previous scenario,’ and they adapt it to the new scenario.”

In particular, Shah and her colleagues — her student Been Kim, whose PhD thesis is the basis of the new paper, and Cynthia Rudin, an associate professor of statistics at the MIT Sloan School of Management — were trying to augment a type of machine learning known as “unsupervised.”

In supervised machine learning, a computer is fed a slew of training data that’s been labeled by humans and tries to find correlations — say, those visual features that occur most frequently in images labeled “car.” In unsupervised machine learning, on the other hand, the computer simply looks for commonalities in unstructured data. The result is a set of data clusters whose members are in some way related, but it may not be obvious how.

Balancing act

The most common example of unsupervised machine learning is what’s known as topic modeling, in which a system clusters documents together according to their most characteristic words. Since the data is unlabeled, the system can’t actually deduce the topics of the documents. But a human reviewing its output would conclude that, for instance, the documents typified by the words “jurisprudence” and “appellate” are legal documents, while those typified by “tonality” and “harmony” are music-theory papers.
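
For readers who want to see what that looks like in practice, here is a minimal sketch using scikit-learn’s LDA implementation on a made-up four-document corpus; the documents, topic count and top-word printout are purely illustrative.

```python
# A minimal topic-modeling sketch: count words, fit LDA with two topics,
# and print the most characteristic words of each topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "appellate court jurisprudence statute ruling",
    "jurisprudence appellate brief statute court",
    "tonality harmony counterpoint cadence melody",
    "harmony melody tonality chord cadence",
]

vectorizer = CountVectorizer().fit(docs)
X = vectorizer.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

words = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top = [words[i] for i in weights.argsort()[-3:][::-1]]
    print(f"topic {topic_idx}: {top}")   # with luck, one legal topic and one music-theory topic
```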

The MIT researchers made two major modifications to the type of algorithm commonly used in unsupervised learning. The first is that the clustering was based not only on data items’ shared features, but also on their similarity to some representative example, which the researchers dubbed a “prototype.”

The other is that rather than simply ranking shared features according to importance, the way a topic-modeling algorithm might, the new algorithm tries to winnow the list of features down to a representative set, which the researchers dubbed a “subspace.” To that end, the algorithm imposes a penalty on subspaces that grow too large. So when it’s creating its data clusters, it has to balance three sometimes-competing objectives: similarity to prototype, subspace size, and clear demarcations between clusters.

“You have to pick a good prototype to describe a good subspace,” Kim explains. “At the same time, you have to pick the right subspace such that the prototype makes sense. So you’re doing it all simultaneously.”
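
The sketch below is not the authors’ algorithm; it is a greedy stand-in for the prototype-and-subspace idea built on ordinary k-means: for each cluster it reports the member closest to the centroid as a “prototype” and the few highest-weight features as a “subspace.” The toy ingredient data is invented for the example.

```python
# A greedy approximation of the prototype/subspace idea, NOT the paper's
# Bayesian Case Model: cluster with k-means, then pick a representative row
# and a small set of dominant features per cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
feature_names = ["flour", "sugar", "cocoa", "basil", "tomato", "pasta"]
# toy data: 20 brownie-like rows and 20 pasta-like rows (bag-of-ingredient counts)
brownies = rng.poisson([3, 3, 2, 0, 0, 0], size=(20, 6))
pasta    = rng.poisson([0, 0, 0, 2, 3, 3], size=(20, 6))
X = np.vstack([brownies, pasta]).astype(float)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for c in range(2):
    members = np.where(km.labels_ == c)[0]
    centroid = km.cluster_centers_[c]
    prototype = members[np.argmin(np.linalg.norm(X[members] - centroid, axis=1))]
    subspace = [feature_names[i] for i in np.argsort(centroid)[-2:][::-1]]
    print(f"cluster {c}: prototype row {prototype}, subspace {subspace}")
```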

The researchers’ first step was to test their new algorithm on a few classic machine-learning tasks, to make sure that the added constraints didn’t impair its performance. They found that on most tasks, it performed as well as its precursor, and on a few, it actually performed better. Shah believes that that could be because the prototype constraint prevents the algorithm from assembling feature lists that contain internal contradictions.

Suppose, for instance, that an unsupervised-learning algorithm was trying to characterize voters in a population. A plurality of the voters might be registered as Democrats, but a plurality of Republicans may have voted in the last primary. The conventional algorithm might then describe the typical voter as a registered Democrat who voted in the last Republican primary. The prototype constraint makes that kind of result very unlikely, since no single voter would match its characterization.

Road test

Next, the researchers conducted a set of experiments to determine whether prototype-based machine learning could actually improve human decision-making. Kim culled a set of recipes from an online database in which they had already been assigned categories — such as chili, pasta, and brownies — and distilled them to just their ingredient lists. Then she fed the lists to both a conventional topic-modeling algorithm and the new, prototype-constrained algorithm.

For each category, the new algorithm found a representative example, while the conventional algorithm produced a list of commonly occurring ingredients. Twenty-four subjects were then given 16 new ingredient lists each. Some of the lists were generated by the new algorithm and some by the conventional algorithm, and the assignment was random. With lists produced by the new algorithm, subjects were successful 86 percent of the time, while with lists produced by the conventional algorithm, they were successful 71 percent of the time.

“I think this is a great idea that models the machine learning and the interface with users appropriately,” says Ashutosh Saxena, an assistant professor of computer science at Cornell University. Saxena leads a research project called Robo Brain, which uses machine learning to comb the Internet and model the type of common-sense associations that a robot would need to navigate its environment.

“In Robo Brain, the machine-learning algorithm is trying to learn something, and it may not be able to do things properly, so it has to show what it has learned to the users to get some feedback so that it can improve its learning,” Saxena says. “We would be very interested in using such a technique to show the output of Robo Brain project to users.”

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

11 open source tools to make the most of machine learning – IT World

Published in From the WWW on December 6th, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Tap the predictive power of machine learning with these diverse, easy-to-implement libraries and frameworks. Spam filtering, face recognition, recommendation engines — when you have a large data set on which you’d like to perform predictive analysis or pattern recognition, machine learning is the way to go. This science, in which computers are trained to learn from, analyze, and act on data without being explicitly programmed, has surged in interest of late outside of its original cloister of academic and high-end programming circles.

Reproduced from/Read the original at IT World

Need expertise with Cloud / Internet Scale computing / Hadoop / Machine Learning / Big Data / Algorithms / Architectures etc.? Contact me – I can help – this is one of my primary areas of expertise.

This rise in popularity is due not only to hardware growing cheaper and more powerful, but also the proliferation of free software that makes machine learning easier to implement both on single machines and at scale. The diversity of machine learning libraries means there’s likely to be an option available regardless of what language or environment you prefer.  These 11 machine learning tools provide functionality for individual apps or whole frameworks, such as Hadoop. Some are more polyglot than others: Scikit, for instance, is exclusively for Python, while Shogun sports interfaces to many languages, from general-purpose to domain-specific.

Scikit-learn

Python has become a go-to programming language for math, science, and statistics due to its ease of adoption and the breadth of libraries available for nearly any application. Scikit-learn leverages this breadth by building on top of several existing Python packages — NumPy, SciPy, and matplotlib — for math and science work. The resulting libraries can be used for interactive “workbench” applications or embedded into other software and reused. The kit is available under a BSD license, so it’s fully open and reusable.
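
As a quick illustration of that workbench style, the sketch below loads a bundled dataset, splits it, fits a classifier and scores it. The dataset and model choice are arbitrary; this is simply the standard scikit-learn estimator pattern rather than anything specific to the article.

```python
# The standard scikit-learn workflow: load data, split, fit, score.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```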

Project: scikit-learn
GitHub:
https://github.com/scikit-learn/scikit-learn

 

Shogun

Among the oldest, most venerable of machine learning libraries, Shogun was created in 1999 and written in C++, but isn’t limited to working in C++. Thanks to the SWIG library, Shogun can be used transparently in such languages and environments as Java, Python, C#, Ruby, R, Lua, Octave, and Matlab.

Though venerable, Shogun has competition. Another C++-based machine learning library, Mlpack, has been around only since 2011, although it professes to be faster and easier to work with (by way of a more integral API set) than competing libraries.

Project: Shogun
GitHub: https://github.com/shogun-toolbox/shogun

 

Accord Framework/AForge.net

Accord, a machine learning and signal processing framework for .Net, is an extension of a previous project in the same vein, AForge.net. “Signal processing,” by the way, refers here to a range of machine learning algorithms for images and audio, such as for seamlessly stitching together images or performing face detection. A set of algorithms for vision processing are included; it operates on image streams (such as video) and can be used to implement such functions as the tracking of moving objects. Accord also includes libraries that provide a more conventional gamut of machine learning functions, from neural networks to decision-tree systems.

Project: Accord Framework/AForge.net
GitHub: https://github.com/accord-net/framework/

 

Mahout

The Mahout framework has long been tied to Hadoop, but many of the algorithms under its umbrella can also run as-is outside Hadoop. They’re useful for stand-alone applications that might eventually be migrated into Hadoop or for Hadoop projects that could be spun off into their own stand-alone applications.

One downside of Mahout: Few of its algorithms currently support the high-performance Spark framework for Hadoop; most instead use the legacy (and increasingly obsolete) MapReduce framework. The project no longer accepts MapReduce-based algorithms, but those looking for a more performant and future-proof library will want to look into MLlib instead.

Project: Mahout

 

MLlib

Apache’s own machine learning library for Spark and Hadoop, MLlib boasts a gamut of common algorithms and useful data types, designed to run at speed and scale. As you’d expect with any Hadoop project, Java is the primary language for working in MLlib, but Python users can connect MLlib with the NumPy library (also used in scikit-learn), and Scala users can write code against MLlib. If setting up a Hadoop cluster is impractical, MLlib can be deployed on top of Spark without Hadoop — and in EC2 or on Mesos.
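
A small sketch of what the Python route looks like, assuming a local Spark installation: parallelize a handful of NumPy points into an RDD and train k-means with the classic MLlib API. The data and cluster count are made up for the example.

```python
# Driving MLlib from Python with the RDD-based API: tiny k-means example.
from numpy import array
from pyspark import SparkContext
from pyspark.mllib.clustering import KMeans

sc = SparkContext(appName="mllib-kmeans-sketch")
points = sc.parallelize([
    array([0.0, 0.0]), array([0.1, 0.2]),      # one blob near the origin
    array([9.0, 9.0]), array([9.2, 8.8]),      # another blob far away
])

model = KMeans.train(points, k=2, maxIterations=10)
print(model.predict(array([0.05, 0.1])))       # cluster id of a new point
sc.stop()
```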

Another project, MLbase, builds on top of MLlib to make it easier to derive results. Rather than write code, users make queries by way of a declarative language à la SQL.

Project: MLlib

 

H2O

0xdata’s H2O’s algorithms are geared for business processes — fraud or trend predictions, for instance — rather than, say, image analysis. H2O can interact in a stand-alone fashion with HDFS stores, on top of YARN, in MapReduce, or directly in an Amazon EC2 instance. Hadoop mavens can use Java to interact with H2O, but the framework also provides bindings for Python, R, and Scala, providing cross-interaction with all the libraries available on those platforms as well.

Project: H2O
GitHub: https://github.com/0xdata/h2o

 

Cloudera Oryx

Yet another machine learning project designed for Hadoop, Oryx comes courtesy of the creators of the Cloudera Hadoop distribution. The name on the label isn’t the only detail that sets Oryx apart: Per Cloudera’s emphasis on analyzing live streaming data by way of the Spark project, Oryx is designed to allow machine learning models to be deployed on real-time streamed data, enabling projects like real-time spam filters or recommendation engines.

An all-new version of the project, tentatively titled Oryx 2, is in the works. It uses Apache projects like Spark and Kafka for better performance, and its components are built along more loosely coupled lines for further future-proofing.

Project: Cloudera Oryx
GitHub:
https://github.com/cloudera/oryx

 

GoLearn

Google’s Go language has been in the wild for only five years, but has started to enjoy wider use, due to a growing collection of libraries. GoLearn was created to address the lack of an all-in-one machine learning library for Go; the goal is “simplicity paired with customizability,” according to developer Stephen Whitworth. The simplicity comes from the way data is loaded and handled in the library, since it’s patterned after SciPy and R. The customizability lies in both the library’s open source nature (it’s MIT-licensed) and in how some of the data structures can be easily extended in an application. Whitworth has also created a Go wrapper for the Vowpal Wabbit library, one of the libraries found in the Shogun toolbox.

Project: GoLearn
GitHub:
https://github.com/sjwhitworth/golearn

 

Weka

Weka, a product of the University of Waikato, New Zealand, collects a set of Java machine learning algorithms engineered specifically for data mining. This GNU GPLv3-licensed collection has a package system to extend its functionality, with both official and unofficial packages available. Weka even comes with a book to explain both the software and the techniques used, so those looking to get a leg up on both the concepts and the software may want to start there.

While Weka isn’t aimed specifically at Hadoop users, it can be used with Hadoop thanks to a set of wrappers produced for the most recent versions of Weka. Note that it doesn’t yet support Spark, only MapReduce. Clojure users can also leverage Weka, thanks to the Clj-ml library.

Project: Weka

 

CUDA-Convnet

By now most everyone knows how GPUs can crunch certain problems faster than CPUs. But applications don’t automatically take advantage of GPU acceleration; they have to be specifically written to do so. CUDA-Convnet is a machine learning library for neural-network applications, written in C++ to exploit Nvidia’s CUDA GPU processing technology (CUDA boards of at least the Fermi generation are required). For those using Python rather than C++, the resulting neural nets can be saved as Python pickled objects and thus accessed from Python.

Note that the original version of the project is no longer being developed, but it has since been reworked into a successor, CUDA-Convnet2, with support for multiple GPUs and Kepler-generation GPUs. A similar project, Vulpes, has been written in F# and works with the .Net framework generally.

Project: CUDA-Convnet

 

ConvNetJS

As the name implies, ConvNetJS provides neural network machine learning libraries for use in JavaScript, facilitating use of the browser as a data workbench. An NPM version is also available for those using Node.js, and the library is designed to make proper use of JavaScript’s asynchronicity — for example, training operations can be given a callback to execute once they complete. Plenty of demo examples are included, too.

Project: ConvNetJS
GitHub:
https://github.com/karpathy/convnetjs

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

IBM Watson Analytics now open for business – ComputerWorld

Published in From the WWW on December 6th, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Credit: Thinkstock

IBM Watson, apparently not content with its Jeopardy winnings, is looking for work. After a lot of buildup, Watson Analytics, the natural language business intelligence tool based on Big Blue’s famed AI, is now available in beta under a freemium model where it’s free to get started — but the really powerful analytics are going to cost you.

That “natural language” part is the key to how IBM sees Watson Analytics differentiating itself in a crowded cloud-delivered BI market defined by startups and big players like Birst, Anaplan, Tidemark, Salesforce Wave and Microsoft. Ask Watson Analytics a question like “Which deals are likely to close?” or “What employees are likely to leave the company?” and it pops up some shiny visualizations with graphs and charts and other good tidbits of data. It’s based on the same language processing that let Watson understand Alex Trebek on Jeopardy in the first place.

Reproduced from/Read the original at ComputerWorld

That simple approach makes it easier for anybody in an organization — not just a data scientist or an IT guy — to ask questions and get answers in real time. It’s supposed to take the pain out of processing complex data, making predictions, testing assumptions and telling stories without a lot of hassle. Because it’s web-based, it works on just about any device with a browser — making it handy for field workers.

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Machine Learning / Algorithms / Architectures etc.? Contact me – I can help – these are among my primary areas of expertise.

As for data ingestion, it can take in data from basically anywhere, with Salesforce, Google Drive, Box, Oracle and, of course, IBM connectors already available. As with any analytics cloud, the more data you put in, the more data points Watson Analytics has to draw from and the more valuable the insights — at least, in theory. Prominent statisticians like Nate Silver have been warning against data overload, under the precept that not all data is created equal.


In short, beware of more noise than signal.

IBM’s moving into a crowded market that’s only getting more so. Everybody wants to be the vendor that provides people outside the IT organization with better visibility into their own data, but differentiation is a tremendous challenge. IBM has a big advantage with a recognizable brand — and Watson definitely makes a better company mascot than HAL 9000.

It’s free to get started, but this is your obligatory reminder that it’s still in beta, and things that are free today may be paid tomorrow. IBM likely has Watson Analytics in beta for the same reason Google kept Gmail in beta for so many years: It wants the freedom to change up the platform, even though it’s definitely ready for prime time.

IBM claims that 22,000 people are already signed up on the platform. And if you give Watson Analytics a shot, let us know in the comments whether Big Blue’s artificial intelligence has what it takes to compete in business intelligence.

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Alan Turing Institute for Data Science to be based at British Library – The Guardian

Published in Snippet on December 6th, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

A new world-class research institute for big data will be dedicated to second world war codebreaker Alan Turing. Photograph: Sherborne School/AFP/Getty Images

The new £42m Alan Turing Institute for Data Science, dedicated to the second world war Enigma codebreaker, is to be based at the British Library at the centre of a new Knowledge Quarter, George Osborne announced on Thursday. The location of the world-class research institute, which will focus on the rapidly moving and globally competitive area of collecting and analysing so-called “big data”, was announced as the chancellor launched the Knowledge Quarter, a partnership of 35 academic, cultural, research, scientific and media organisations in and around the King’s Cross and Euston area of the capital.

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Algorithms / Architectures etc.? Contact me – I can help – this is one of my primary areas of expertise.

First announced at this year’s budget, the institute will have links with universities across the UK. It will be at the heart of the new Knowledge Quarter, which encompasses knowledge resources and expertise that range from the world’s earliest books and manuscripts to the latest fashion designs and the cutting edge of medical research. The one-mile radius around King’s Cross now forms one of the greatest concentrations of knowledge-based institutions in the world, said Osborne. “30 or 40 years ago, this was an area of London in decline, and when you look at all the exciting things happening here – the British Library, the renovation of King’s Cross, the arrival of great companies like Google and Guardian Media Group, the universities based here – it is really one of the most exciting places in the world. And it returns this area to what it was 100 or 150 years ago, when it was a centre of modern communication and modern learning,” he said.

Partners include the British Library, Google, the Wellcome Trust, Camden Council, the British Museum, Central Saint Martins, University College London, the Francis Crick Institute and the Royal College of Physicians, as well as the Guardian. Its members employ more than 30,000 people, turn over more than £2bn and serve more than 8 million visitors annually.


Read the full/original article from The Guardian

 

Reproduced and/or syndicated content. All content and images are copyright the respective owners.


How Google “Translates” Pictures Into Words Using Vector Space Mathematics – MIT

Published in Snippet on December 3rd, 2014

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Google engineers have trained a machine learning algorithm to write picture captions using the same techniques it developed for language translation.


Translating one language into another has always been a difficult task. But in recent years, Google has transformed this process by developing machine translation algorithms that change the nature of cross cultural communications through Google Translate.

Now that company is using the same machine learning technique to translate pictures into words. The result is a system that automatically generates picture captions that accurately describe the content of images. That’s something that will be useful for search engines, for automated publishing and for helping the visually impaired navigate the web and, indeed, the wider world.

The conventional approach to language translation is an iterative process that starts by translating words individually and then reordering the words and phrases to improve the translation. But in recent years, Google has worked out how to use its massive search database to translate text in an entirely different way.

The approach is essentially to count how often words appear next to, or close to, other words and then define them in an abstract vector space in relation to each other. This allows every word to be represented by a vector in this space and sentences to be represented by combinations of vectors.

Read the original/Reproduced from MIT Technology Review

Google goes on to make an important assumption. This is that specific words have the same relationship to each other regardless of the language. For example, the vector “king – man + woman = queen” should hold true in all languages.

That makes language translation a problem of vector space mathematics. Google Translate approaches it by turning a sentence into a vector and then using that vector to generate the equivalent sentence in another language.
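
That arithmetic can be illustrated with a toy calculation. The tiny, hand-made vectors below are invented for the example (real systems learn much higher-dimensional vectors from co-occurrence statistics), but the king - man + woman lookup works the same way.

```python
# A toy illustration of word-vector arithmetic with hand-made 3-d vectors
# whose dimensions loosely mean (royalty, maleness, femaleness).
import numpy as np

vectors = {
    "king":   np.array([0.9, 0.8, 0.1]),
    "queen":  np.array([0.9, 0.1, 0.8]),
    "prince": np.array([0.7, 0.8, 0.1]),
    "man":    np.array([0.1, 0.9, 0.1]),
    "woman":  np.array([0.1, 0.1, 0.9]),
}

def nearest(target, exclude):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cosine(vectors[w], target))

result = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(result, exclude={"king", "man", "woman"}))   # -> "queen"
```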

Now Oriol Vinyals and pals at Google are using a similar approach to translate images into words. Their technique is to use a neural network to study a dataset of 100,000 images and their captions and so learn how to classify the content of images.

But instead of producing a set of words that describe the image, their algorithm produces a vector that represents the relationship between the words. This vector can then be plugged into Google’s existing translation algorithm to produce a caption in English, or indeed in any other language. In effect, Google’s machine learning approach has learnt to “translate” images into words.

To test the efficacy of this approach, they used human evaluators recruited from Amazon’s Mechanical Turk to rate captions generated automatically in this way along with those generated by other automated approaches and by humans.

The results show that the new system, which Google calls Neural Image Caption (NIC), fares well. Using a well-known dataset of images called PASCAL, Neural Image Caption clearly outperformed other automated approaches. “NIC yielded a BLEU score of 59, to be compared to the current state-of-the-art of 25, while human performance reaches 69,” say Vinyals and co.

That’s not bad and the approach looks set to get better as the size of the training datasets increases. “It is clear from these experiments that, as the size of the available datasets for image description increases, so will the performance of approaches like NIC,” say the Google team.

Clearly, this is yet another task for which the days of human supremacy over machines are numbered.

Ref: arxiv.org/abs/1411.4555  Show and Tell: A Neural Image Caption Generator

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

© all content copyright respective owners