HoloLens – The era of holographic computing is here

Published in Snippet on January 25th, 2015 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

“You put on Microsoft’s ‘HoloLens,’ which are goggles with a glass lens in front of them. Through the goggles you see images in front of you that you can manipulate.” – Business Insider

This is what the HoloLens looks like on people.

Read the article at/Reproduced from Business Insider

Read about the technology at Microsoft.com

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

NASA, Microsoft Collaboration Will Allow Scientists to ‘Work on Mars’ – NASA

Published in From the WWW on January 24th, 2015 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

New NASA software called OnSight will use holographic computing to overlay visual information and data from the agency’s Mars Curiosity Rover into the user’s field of view. Holographic computing blends a view of the physical world with computer-generated imagery to create a hybrid of real and virtual. Image Credit: NASA

NASA and Microsoft have teamed up to develop software called OnSight, a new technology that will enable scientists to work virtually on Mars using wearable technology called Microsoft HoloLens. Developed by NASA’s Jet Propulsion Laboratory (JPL) in Pasadena, California, OnSight will give scientists a means to plan and, along with the Mars Curiosity rover, conduct science operations on the Red Planet.

“OnSight gives our rover scientists the ability to walk around and explore Mars right from their offices,” said Dave Lavery, program executive for the Mars Science Laboratory mission at NASA Headquarters in Washington. “It fundamentally changes our perception of Mars, and how we understand the Mars environment surrounding the rover.”

Read the original/reproduced from NASA

OnSight will use real rover data and extend the Curiosity mission’s existing planning tools by creating a 3-D simulation of the Martian environment where scientists around the world can meet. Program scientists will be able to examine the rover’s worksite from a first-person perspective, plan new activities and preview the results of their work firsthand.

“We believe OnSight will enhance the ways in which we explore Mars and share that journey of exploration with the world,” said Jeff Norris, JPL’s OnSight project manager.

Until now, rover operations required scientists to examine Mars imagery on a computer screen, and make inferences about what they are seeing. But images, even 3-D stereo views, lack a natural sense of depth that human vision employs to understand spatial relationships.

The OnSight system uses holographic computing to overlay visual information and rover data into the user’s field of view. Holographic computing blends a view of the physical world with computer-generated imagery to create a hybrid of real and virtual.

To view this holographic realm, members of the Curiosity mission team don a Microsoft HoloLens device, which surrounds them with images from the rover’s Martian field site. They then can stroll around the rocky surface or crouch down to examine rocky outcrops from different angles. The tool provides access to scientists and engineers looking to interact with Mars in a more natural, human way.

“Previously, our Mars explorers have been stuck on one side of a computer screen. This tool gives them the ability to explore the rover’s surroundings much as an Earth geologist would do field work here on our planet,” said Norris.

The OnSight tool also will be useful for planning rover operations. For example, scientists can program activities for many of the rover’s science instruments by looking at a target and using gestures to select menu commands.

The joint effort to develop OnSight with Microsoft grew from an ongoing partnership to investigate advances in human-robot interaction.  The JPL team responsible for OnSight specializes in systems to control robots and spacecraft. The tool will assist researchers in better understanding the environment and workspace of robotic spacecraft — something that can be quite challenging with their traditional suite of tools.

JPL plans to begin testing OnSight in Curiosity mission operations later this year. Future applications may include Mars 2020 rover mission operations, and other applications in support of NASA’s journey to Mars.

JPL manages the Mars Science Laboratory Project for NASA’s Science Mission Directorate in Washington, and built the project’s Curiosity rover.

Learn more about NASA’s journey to Mars at:

http://www.nasa.gov/mars

-end-

Dwayne Brown
Headquarters, Washington
202-358-1726
[email protected]

Guy Webster / Veronica McGregor
Jet Propulsion Laboratory, Pasadena, Calif.
818-354-6278 / 818-354-9452
[email protected] / [email protected]

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

Optimizing optimization algorithms – MIT News

Published in From the WWW on January 24th, 2015 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

This sequence of graphs illustrates the application of the researchers’ technique to a real-world computer vision problem. The solution to each successive problem (red balls) is used to initialize (green arrows) the search for a solution to the next. Courtesy of the researchers

Analysis shows how to get the best results when approximating solutions to complex engineering problems.

Optimization algorithms, which try to find the minimum values of mathematical functions, are everywhere in engineering. Among other things, they’re used to evaluate design tradeoffs, to assess control systems, and to find patterns in data.

One way to solve a difficult optimization problem is to first reduce it to a related but much simpler problem, then gradually add complexity back in, solving each new problem in turn and using its solution as a guide to solving the next one.

This approach seems to work well in practice, but it’s never been characterized theoretically.

This month, at the International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition, Hossein Mobahi, a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and John Fisher, a senior research scientist at CSAIL, describe a way to generate that sequence of simplified functions that guarantees the best approximation that the method can offer.

Read the original/reproduced from MIT News. By Larry Hardesty, January 21, 2015

“There are some fundamental questions about this method that we answer for the first time,” Mobahi says. “For example, I told you that you start from a simple problem, but I didn’t tell you how you choose that simple problem. There are infinitely many functions you can start with. Which one is good? Even if I tell you what function to start with, there are infinitely many ways to transform that to your actual problem. And that transformation affects what you get at the end.”

Bottoming out

To get a sense of how optimization works, suppose that you’re a canned-food retailer trying to save money on steel, so you want a can design that minimizes the ratio of surface area to volume. That ratio is a function of the can’s height and radius, so if you can find the minimum value of the function, you’ll know the can’s optimal dimensions. If you’re a car designer trying to balance the costs of components made from different materials with the car’s weight and wind resistance, your function — known in optimization as a “cost function” — will be much more complex, but the principle is the same.
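To make the can example concrete, here is a minimal sketch in Python (not taken from the article) that minimizes a cylinder’s surface area at a fixed volume, one standard way to make the surface-area-to-volume trade-off well-posed. The volume value and the use of scipy.optimize are my own illustrative choices.

```python
# A minimal sketch of the canned-food example (not from the article):
# minimize the surface area of a cylindrical can that must hold a fixed volume V.
# With the volume constraint substituted in, the cost depends on the radius alone,
# and a standard optimizer recovers the well-known optimum h = 2r.
import math
from scipy.optimize import minimize_scalar

V = 330.0  # assumed can volume in cm^3, chosen only for illustration

def surface_area(r):
    """Cost function: total surface area of a can of radius r holding volume V."""
    h = V / (math.pi * r ** 2)          # height implied by the volume constraint
    return 2 * math.pi * r ** 2 + 2 * math.pi * r * h

res = minimize_scalar(surface_area, bounds=(0.1, 20.0), method="bounded")
r_opt = res.x
h_opt = V / (math.pi * r_opt ** 2)
print(f"optimal radius ~ {r_opt:.2f} cm")
print(f"optimal height ~ {h_opt:.2f} cm  (close to 2 * radius, as theory predicts)")
```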

Machine-learning algorithms frequently attempt to identify features of data sets that are useful for classification tasks — say, visual features characteristic of cars. Finding the smallest such set of features with the greatest predictive value is also an optimization problem.

“Most of the efficient algorithms that we have for solving optimization tasks work based on local search, which means you initialize them with some guess about the solution, and they try to see in which direction they can improve that, and then they take that step,” Mobahi says. “Using this technique, they can converge to something called a local minimum, which means a point that compared to its neighborhood is lower. But it may not be a global minimum. There could be a point that is much lower but farther away.”
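A small illustration of that local-search behaviour, using plain gradient descent on an arbitrary non-convex function of my own choosing (nothing from the paper): the optimizer settles into whichever minimum lies downhill of its starting guess, which need not be the global one.

```python
# Gradient descent on a non-convex cost converges to a minimum that depends on the
# initial guess -- the local-vs-global issue Mobahi describes.
import numpy as np

f  = lambda x: x**2 + 10 * np.sin(x)          # non-convex cost function (illustrative)
df = lambda x: 2 * x + 10 * np.cos(x)         # its derivative

def gradient_descent(x0, step=0.05, iters=500):
    x = x0
    for _ in range(iters):
        x -= step * df(x)                     # move downhill
    return x

for start in (0.0, 5.0):
    x_min = gradient_descent(start)
    print(f"start {start:4.1f} -> x = {x_min:6.3f}, f(x) = {f(x_min):7.3f}")
# Starting at 0.0 reaches the global minimum near x ~ -1.31;
# starting at 5.0 gets stuck in the local minimum near x ~ 3.84.
```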

A local minimum is guaranteed to be a global minimum, however, if the function is convex, meaning that it slopes everywhere toward its minimum. The function y = x² is convex, since it describes a parabola centered at the origin. The function y = sin x is not, since it describes a sine wave that undulates up and down.

Smooth sailing

Mobahi and Fisher’s method begins by trying to find a convex approximation of an optimization problem, using a technique called Gaussian smoothing. Gaussian smoothing converts the cost function into a related function that gives not the value that the cost function would, but a weighted average of all the surrounding values. This has the effect of smoothing out any abrupt dips or ascents in the cost function’s graph.

The weights assigned the surrounding values are determined by a Gaussian function, or normal distribution — the bell curve familiar from basic statistics. Nearby values count more toward the average than distant values do.

The width of a Gaussian function is determined by a single parameter. Mobahi and Fisher begin with a very wide Gaussian, which, under certain conditions, yields a convex function. Then they steadily contract the width of the Gaussian, generating a series of intermediary problems. At each stage, they use the solution to the last problem to initialize the search for a solution to the next one. By the time the width of the distribution has shrunk to zero, they’ve recovered the original cost function, since every value is simply the average of itself.
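A rough sketch of that Gaussian-smoothing continuation loop, as described above but not the authors’ algorithm: the smoothed cost is estimated by Monte-Carlo averaging, a deliberately crude local search stands in for a real optimizer, and the cost function, widths, and sample counts are all illustrative assumptions.

```python
# Continuation by Gaussian smoothing: solve a sequence of smoothed problems with a
# steadily narrowing Gaussian, warm-starting each search from the previous solution.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x**2 + 10 * np.sin(x)                  # the original, non-convex cost

def smoothed(x, sigma, samples=2000):
    """Monte-Carlo estimate of the Gaussian-weighted average of f around x."""
    return np.mean(f(x + sigma * rng.standard_normal(samples)))

def local_search(g, x0, radius=3.0, grid=200):
    """Deliberately crude local search: best point of g on a grid centred at x0."""
    xs = np.linspace(x0 - radius, x0 + radius, grid)
    return xs[int(np.argmin([g(x) for x in xs]))]

x = 6.0                                              # a poor initial guess on purpose
for sigma in (8.0, 4.0, 2.0, 1.0, 0.5, 0.0):         # steadily contract the Gaussian
    g = f if sigma == 0.0 else (lambda x_, s=sigma: smoothed(x_, s))
    x = local_search(g, x)                           # warm-start from the previous solution
    print(f"sigma = {sigma:3.1f} -> x = {x:6.3f}, f(x) = {f(x):7.3f}")
# At sigma = 0 the smoothed function is the original cost again, and the final x
# lands near the global minimum (about -1.31) despite the bad starting point.
```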

“The continuation method for optimization is something that is really widely used in practice, widely used in computer vision, for solving alignment problems, for solving tracking problems, a bunch of different places, but it’s not very well understood,” says John Wright, an assistant professor of electrical engineering at Columbia University who was not involved in this work. “The interesting thing about Hossein’s work in general, and this paper in particular, is that he’s really digging into this continuation method and trying to see what we can say analytically about this.”

“The practical utility of that is, there might be any number of different ways that you could go about doing smoothing or trying to do coarse-to-fine optimization,” Wright adds. “If you know ahead of time that there’s a right one, then you don’t waste a lot of time pursuing the wrong ones. You have a recipe rather than having to look around.”

Read the original/reproduced from MIT News. By Larry Hardesty, January 21, 2015

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

Facebook Offers Artificial Intelligence Tech to Open Source Group – BITS/NY Times

Published in From the WWW on January 21st, 2015 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

Mark Zuckerberg, chief executive of Facebook. By releasing tools for computers to researchers, Facebook will also be able to accelerate its own artificial intelligence projects. Credit: Jose Miguel Gomez/Reuters

Facebook wants the world to see a lot more patterns and predictions. The company said Friday that it was donating for public use several powerful tools for computers, including the means to go through huge amounts of data, looking for common elements of information. The products, used in a so-called neural network of machines, can speed pattern recognition by up to 23.5 times, Facebook said.

Read the original/reproduced from BITS/NY Times. By Quentin Hardy

The tools will be donated to Torch, an open source software project that is focused on a kind of data analysis known as deep learning. Deep learning is a type of machine learning that mimics how scientists think the brain works, over time making associations that separate meaningless information from meaningful signals.
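For readers unfamiliar with the term, here is a toy sketch of what training a neural network means in practice, written in plain Python/NumPy rather than Torch; the task (XOR) and the network size are arbitrary choices for illustration, not anything Facebook released.

```python
# A tiny two-layer neural network that learns a pattern from examples by repeatedly
# adjusting its weights -- the kind of training loop a framework like Torch automates
# and accelerates at much larger scale.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # target pattern (XOR)

W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))     # two layers of weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                                         # gradient-descent training
    h = sigmoid(X @ W1)                                       # hidden-layer activations
    out = sigmoid(h @ W2)                                     # network prediction
    grad_out = (out - y) * out * (1 - out)                    # backpropagate the error
    grad_W2 = h.T @ grad_out
    grad_h = grad_out @ W2.T * h * (1 - h)
    grad_W1 = X.T @ grad_h
    W1 -= 0.5 * grad_W1
    W2 -= 0.5 * grad_W2

# Predictions are typically close to [0, 1, 1, 0] once training finishes.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))
```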

Companies like Facebook, Google, Microsoft and Twitter use Torch to figure out things like the probable contents of an image, or what ad to put in front of you next.

“It’s very useful for neural nets and artificial intelligence in general,” said Soumith Chintala, a research engineer at Facebook AI Research, Facebook’s lab for advanced computing. He is also one of the creators of the Torch project. Aside from big companies, he said, Torch can be useful for “start-ups, university labs.”

Certainly, Facebook’s move shows a bit of enlightened self-interest. By releasing the tools to a large community of researchers and developers, Facebook will also be able to accelerate its own AI projects. Mark Zuckerberg has previously cited such open source tactics as his reason for starting the Open Compute Project, an open source effort to catch up with Google, Amazon and Yahoo on building big data centers.

Torch is also useful in computer vision, or the recognition of objects in the physical world, as well as question answering systems. Mr. Chintala said his group had fed a machine a simplified version of “The Lord of the Rings” novels and the computer could understand and answer basic questions about the book.

“It’s very early, but it shows incredible promise,” he said. Facebook can already look at some sentences, he said, and figure out what kind of hashtag should be associated with the words, which could be useful in better understanding people’s intentions. Such techniques could also be used in determining the intention behind an Internet search, something Google does not do on its regular search.

Besides the tools for training neural nets faster, Facebook’s donations include a new means of training multiple computer processors at the same time, a means of cataloging words when analyzing language and tools for better speech recognition software.

Read the original/reproduced from BITS/NY Times

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

Evolutionary approaches to big-data problems – MIT News

Published in From the WWW on January 17th, 2015 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

Una-May O’Reilly applies machine learning and evolutionary algorithms to tackle some of the world’s biggest big-data challenges.

The AnyScale Learning For All (ALFA) Group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to solve the most challenging big-data problems — questions that go beyond the scope of typical analytics. ALFA applies the latest machine learning and evolutionary computing concepts to target very complex problems that involve high dimensionality.

Reproduced from/Read the original at MIT News. By Eric Brown | MIT Industrial Liaison Program
January 14, 2015

Need expertise with Cloud / Internet-scale computing / Hadoop / Big Data / Machine Learning / Algorithms / Architectures, etc.? Contact me - I can help; these are my primary areas of expertise.

“People have data coming at them from so many different channels these days,” says ALFA director Una-May O’Reilly, a principal research scientist at CSAIL. “We’re helping them connect and link the data between those channels.”

The ALFA Group has taken on challenges ranging from laying out wind farms to studying and categorizing the beats in blood pressure data in order to predict drops and spikes. The group is also analyzing huge volumes of recorded click data to predict MOOC-learning behavior, and is even helping the IRS protect against costly tax-evasion schemes.

ALFA prefers the challenge of working with raw data that comes directly from the source. It then investigates the data with a variety of techniques, most of which involve scalable machine learning and evolutionary computing algorithms.

“Machine learning is very useful for retrospectively looking back at the data to help you predict the future,” says O’Reilly. “Evolutionary computation can be used in the same way, and it’s particularly well suited to large-scale problems with very high dimensions.”

In the past, machine learning was challenged by the lack of sufficient data to infer predictive models or classification labels, says O’Reilly. “Now we have too much data, so we have scalable machine learning to try to process a vast quantity of data exemplars,” she says. “We also need to improve machine learning’s capability to cope with the additional variables that come with extremely high dimensional problems.”

O’Reilly has a particular interest in ALFA’s other major tool: evolutionary computing. “Taking ideas from evolution, like population-based adaptation and genetic inheritance, and bringing them into computational models is really effective,” she says. “In engineering, we often use evolutionary algorithms like covariance-matrix adaptation or discrete-valued algorithms for optimization. Also, one can parallelize evolutionary algorithms almost embarrassingly easily, which allows it to handle a lot of the latest data-knowledge discovery problems.”

Within the evolutionary field, O’Reilly is especially interested in genetic programming, or as she defines it, “the evolution of programs.” “We distribute the genetic programming algorithms over many nodes and then factor the data across the nodes,” she explains. “We take all the independent solutions we can compute in parallel and bring them together. We then eliminate the weaker ones and collectively fuse the stronger ones to create an ensemble. We’ve shown that ensemble based models are more accurate than a single model based on all the data.”
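A drastically simplified sketch of that distribute-then-fuse pattern, with toy linear models standing in for evolved programs; the data, population sizes, and selection rule are invented for illustration and are not ALFA’s implementation.

```python
# Factor the data across "nodes", evolve an independent model on each share,
# discard the weaker solutions, and fuse the stronger ones into an ensemble.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 5))
y = X @ np.array([1.5, -2.0, 0.0, 0.7, 3.0]) + 0.1 * rng.normal(size=4000)

def evolve_model(Xp, yp, pop_size=40, generations=60):
    """Tiny evolutionary search over linear-model weights on one data partition."""
    pop = rng.normal(size=(pop_size, Xp.shape[1]))
    for _ in range(generations):
        fitness = -np.mean((Xp @ pop.T - yp[:, None]) ** 2, axis=0)   # negative MSE
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]           # keep the fittest half
        children = parents + 0.1 * rng.normal(size=parents.shape)     # mutate
        pop = np.vstack([parents, children])
    return pop[np.argmax(-np.mean((Xp @ pop.T - yp[:, None]) ** 2, axis=0))]

# "Factor the data across the nodes": each node sees only one slice of the data.
models = [evolve_model(Xp, yp)
          for Xp, yp in zip(np.array_split(X, 8), np.array_split(y, 8))]

# Eliminate the weaker solutions and fuse the stronger ones into an ensemble.
errors = [np.mean((X @ w - y) ** 2) for w in models]
keep = [w for w, e in zip(models, errors) if e <= np.median(errors)]
ensemble_pred = np.mean([X @ w for w in keep], axis=0)
print("ensemble MSE:", np.mean((ensemble_pred - y) ** 2))
```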

Laying out wind farms

One of ALFA’s most successful projects has been in developing algorithms to help design wind farms. The problem is marked by very high dimensionality, especially when hundreds of turbines are involved, says O’Reilly.

“One can see great efficiency gains in optimizing the placement of turbines, but it’s a really complex problem,” she says. “First, there are the parameters of the turbine itself: its energy gain, its height, and its proximity cone. You must find out how much wind is required for the site and then acquire the finer detailed information about where the wind is coming from and in what quantities. You have to factor in the topographical conditions of the land and the way the wind sweeps through it.”

The most difficult variable is the wake effect of one turbine on the turbines behind it, says O’Reilly. “We have to do very complex flow modeling to be able to calculate the loss behind each turbine.”

ALFA discovered how to apply parallelized evolutionary algorithms that could scale up for wind farms of a thousand plus turbines. “We were able to scale to lay out turbines on a bigger scale than anyone had ever done before,” says O’Reilly.
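A toy version of the layout problem, assuming a single prevailing wind direction and a crude linear wake penalty (both my own simplifications), optimized with a bare-bones mutate-and-keep-if-better evolutionary loop rather than ALFA’s parallelized algorithms.

```python
# Evolutionary wind-farm layout sketch: candidate layouts are mutated and selected
# for total output, where any turbine sitting close behind another (along the assumed
# wind direction) loses part of its output to the wake.
import numpy as np

rng = np.random.default_rng(0)
N_TURBINES, FIELD = 20, 2000.0          # number of turbines, square site size in metres

def farm_output(layout):
    """Score a layout: each turbine yields 1.0, minus wake losses from upwind neighbours."""
    total = 0.0
    for i, (xi, yi) in enumerate(layout):
        loss = 0.0
        for j, (xj, yj) in enumerate(layout):
            dx, dy = xi - xj, yi - yj           # wind assumed to blow along +x
            if j != i and dx > 0 and abs(dy) < 100 and dx < 500:
                loss += 0.5 * (1 - dx / 500)    # closer behind -> bigger wake loss
        total += max(0.0, 1.0 - loss)
    return total

# Simple (1+1)-style evolutionary search: mutate the layout, keep it if it scores better.
layout = rng.uniform(0, FIELD, size=(N_TURBINES, 2))
best = farm_output(layout)
for _ in range(3000):
    candidate = np.clip(layout + rng.normal(scale=50.0, size=layout.shape), 0, FIELD)
    score = farm_output(candidate)
    if score > best:
        layout, best = candidate, score
print(f"estimated farm output after optimisation: {best:.2f} / {N_TURBINES}")
```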

More recently, ALFA has been building a generative template for site design. “Now, we’re using evolutionary concepts to develop a program that can lay out any set of turbines on any site,” she says. “We’re building a design process rather than the site design itself.”

GigaBEATS: Making sense of blood pressure data

Many of the same evolutionary and machine-learning concepts used to lay out a wind farm can also be applied to gleaning insights from clinical data. ALFA is attempting to elicit useful information from the growing volume of physiological data collected from medical sensors. The data include measurements of everything from sleep patterns to ECG and blood pressure.

“It’s hard for clinicians to understand such a high volume of data,” says O’Reilly. “We’re interested in taking signal-level information and combining it with machine learning to make better predictions.”

Researchers tend to collect a small amount of data from a small sample, and do a study that takes over 18 months, says O’Reilly. “We want to take that 18 months and reduce it to hours,” she says.

ALFA is working on a project called GigaBEATS that extracts knowledge from very large sets of physiological data. Initially, the project has studied blood-pressure data taken from thousands of patients in critical care units.

“We are examining the microscopic characteristics of every beat,” says O’Reilly. “Eventually, we will aggregate those characteristics in terms of historical segments that allow us to predict blood pressure spikes.”

The ALFA group has created a database called BeatDB that collects not only the beats of the waveforms, but “a set of properties or features of every beat,” says O’Reilly. BeatDB has already stored a billion blood pressure beat features from more than 5,000 patients.

“For every beat we describe a time-series set of morphological features,” explains O’Reilly. “Once we establish a solid set of fundamental data about the signals, we can provide technology as services on top of that, allowing new beats to be added and processed.”

Because BeatDB enables beats to be aggregated into segments, physicians can better decide how much history is needed to make a prediction. “To predict a blood pressure drop 15 minutes ahead, you might need hours of patient data,” says O’Reilly. “Because the BeatDB data is organized, and leverages machine learning algorithms, physicians don’t have to compute this over and over again. They can experiment with how much data and lead time is required, and then check the accuracy of their models.”
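A schematic sketch of the windowing idea behind that workflow: per-beat features are aggregated over a configurable amount of history and paired with a label taken a configurable lead time ahead. The feature, thresholds, and window sizes below are placeholders of mine, not BeatDB’s actual schema.

```python
# Turn a per-beat feature stream into (history window -> future label) training examples,
# so that "how much history" and "how much lead time" become cheap knobs to experiment with.
import numpy as np

rng = np.random.default_rng(0)
beats = rng.normal(loc=100.0, scale=5.0, size=5000)      # stand-in per-beat feature (e.g. mean ABP)

HISTORY, LEAD = 600, 300                                 # beats of history, beats of lead time
X, y = [], []
for t in range(HISTORY, len(beats) - LEAD):
    window = beats[t - HISTORY:t]
    X.append([window.mean(),                             # aggregate the history segment
              window.std(),
              window[-60:].mean() - window[:60].mean()]) # recent trend within the window
    y.append(int(beats[t + LEAD] < 90.0))                # label: pressure drop LEAD beats ahead
X, y = np.array(X), np.array(y)

print(X.shape, "examples;", f"{y.mean():.3f}", "fraction labelled as future drops")
# With windows precomputed like this, changing HISTORY or LEAD and re-checking a model's
# accuracy is cheap -- the kind of experimentation O'Reilly describes.
```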

Recently, O’Reilly has begun to use the technology to explore ECG data. “We’re hoping to look at data that might be collected in context of the quantified self,” says O’Reilly, referring to the emerging practice of wearing fitness bracelets to track one’s internal data.

“More and more people are instrumenting themselves by wearing a Fitbit that tells them whether they’re tired or how well they sleep,” says O’Reilly. “Interpreting all these bodily signals is similar to the GigaBEATS project. A BeatDB-like database and cloud-based facility could be set up around these signals to help interpret them.”

Need expertise with Cloud / Internet-scale computing / Hadoop / Big Data / Machine Learning / Algorithms / Architectures, etc.? Contact me - I can help; these are my primary areas of expertise.

Reproduced from/Read the original at MIT News. By Eric Brown | MIT Industrial Liaison Program
January 14, 2015

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

The Sound of Chirping Birds in the Control Centre – Bielefeld University

Published in From the WWW on January 17th, 2015 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

Computer scientists develop a method for monitoring by listening in factories, operating rooms, and postal logistics centres. When the alarm light starts blinking in the control room of a factory, the problem has already occurred. Computer scientists at the Cluster of Excellence Cognitive Interaction Technology (CITEC) at Bielefeld University and the University of Vienna have developed a method that allows control room staff to monitor several processes at the same time, which enables them to take preventative action. The trick: processes are coded with sounds. Workers hear, for example, whether there is enough material on the conveyor belt and can react before the supply is used up.

Read the original/reproduced from Bielefeld University

Until now, control centres have had to use computer screens and control desks to monitor processes. The new system SoProMon uses sound to let workers hear when something goes wrong. Photo: Bielefeld University

Processes must be monitored in a wide range of industries – whether at a factory, in the operating room, at a postal logistics centre, or in space flight. “Up until now, monitoring processes has been a visually supported field of work,” explains CITEC computer scientist Thomas Hermann. “Computer screens or displays at control desks show whether everything is in order. With large quantities of information, though, something can easily slip by unnoticed. Staff must maintain a high level of concentration to keep all processes in check,” says Hermann, who leads the “Ambient Intelligence” research group at CITEC. “With our new system, we use additional acoustic signals. Our method thus enables a kind of passive monitoring, that is, surveillance that can be accomplished alongside other tasks.”

Dr. Hermann developed the new system together with Tobias Hildebrandt and Professor Dr. Stefanie Rinderle-Ma from the University of Vienna. Using the example of a production plant, a simulation shows how the method works. Each station is given a different sound: the delivery is announced with the sound of chirping birds, buzzing bees are assigned to another station, and the sounds of branches rustling in the wind are heard at a third. Outgoing shipments are coded with the sound of dripping water. If everything is running normally, all four sounds stay discreetly in the background. “We chose these woodland sounds because they compose an acoustic ambience that is both pleasant to listen to and unobtrusive,” explains Hermann. A critical situation is then introduced at one of the stations – the finished product is beginning to back up at the outgoing shipping station – and the sound that belongs to this station becomes increasingly loud. A staff member can then react before a disruption occurs. In this case, the worker would arrange for the products to be loaded at an earlier stage, thus preventing an emergency stop on the shop floor. According to Dr. Hermann’s colleague Tobias Hildebrandt, who studies business information technology at the University of Vienna, the system is not only suitable for production facilities. “It could be introduced in almost every industry in which processes are centrally controlled or monitored – everywhere from hospitals to traffic control desks for trains and buses,” says Hildebrandt.
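A rough sketch of the sonification principle (not the SoProMon implementation): each station gets its own looping tone, and the tone’s gain rises with the station’s criticality, so a developing problem becomes audible in the mix. The station names, frequencies, and gain curve are invented for illustration; the output is written with Python’s standard wave module.

```python
# Mix one tone per monitored station into a short ambience; a station whose state is
# becoming critical is given a higher gain and therefore stands out acoustically.
import math, struct, wave

RATE = 22050
stations = {                       # station -> (tone frequency in Hz, criticality 0..1)
    "delivery":  (220.0, 0.1),
    "assembly":  (330.0, 0.1),
    "packaging": (440.0, 0.1),
    "shipping":  (550.0, 0.9),     # backlog building up -> this sound gets louder
}

def gain(criticality):
    """Quiet background level when nominal, ramping up as the state becomes critical."""
    return 0.05 + 0.45 * criticality

samples = []
for n in range(RATE * 3):          # three seconds of mixed ambience
    t = n / RATE
    mix = sum(gain(c) * math.sin(2 * math.pi * f * t) for f, c in stations.values())
    samples.append(int(max(-1.0, min(1.0, mix / len(stations))) * 32767))

with wave.open("sopromon_sketch.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)            # 16-bit samples
    wav.setframerate(RATE)
    wav.writeframes(struct.pack("<" + "h" * len(samples), *samples))
```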

Thomas Hermann is an expert in sonification, the systematic representation of data as sound. Photo: CITEC/Bielefeld University

“Monitoring processes by listening has several advantages,” explains Thomas Hermann. “Distraction is less of an issue in comparison to visual monitoring. Moreover, we use our ears to perceive everything that is going on around us. With our eyes, we must look precisely at the thing that is important to the current task. Generally speaking, the advantage of listening is that it happens all the time. We can close our eyelids, but there are no “earlids” that can be shut,” explains the computer scientist. Furthermore, according to Hermann, auditory stimuli are processed more quickly than visual stimuli. “The special feature of listening is that people are able to recognize the smallest changes in tones. For example, in a car, one can hear subtle changes in road conditions based on the sounds of driving on the street.”

The new system is called “SoProMon.” The name stands for “Sonification system for process monitoring as secondary task.” The research paper authored by the three developers was awarded “Best Paper” in November 2014 at the IEEE International Conference on Cognitive Infocommunications (CogInfoCom) in Vietri sul Mare, Italy.

Thomas Hermann is an expert in sonification, the systematic representation of data using non-speech sound. In 2012, he worked with Berlin media artists to produce software that perceptualizes the German “Twitterscape” in sounds. The software automatically assigns a sound to a topic. When a Twitter user writes a short message touching on the topic, the assigned sound can be heard. Dr. Hermann’s research group “Ambient Intelligence” develops intelligent environments, novel interactive objects, and attentional systems to support humans in everyday life. In addition to sonification, the researchers also focus on multimodal interaction, which is the principle that a device communicates with its user through several senses – from hearing to touch – and can also be controlled by the user via different sensory inputs.

Read the original/reproduced from Bielefeld University

Further information is available online at:
• Video and Audio Demonstration of the SoProMon System: http://pub.uni-bielefeld.de/data/2695709
• Website of the “Ambient Intelligence” research group: www.cit-ec.de/ami
• Bielefeld sonification researcher Dr. Thomas Hermann codes the German Twitterscape in sounds (Press release from 16 January 2012, in German): http://ekvv.uni-bielefeld.de/blog/pressemitteilungen/entry/bielefelder_sonifikationsforscher_dr_thomas_hermann

Original Publication:
Tobias Hildebrandt, Thomas Hermann, Stefanie Rinderle-Ma: A Sonification System for Process Monitoring as Secondary Task. Proceedings of the 5th IEEE Conference on Cognitive Infocommunication (in print), http://eprints.cs.univie.ac.at/4211/1/HildebrandtHermannRinderleMa2014-coginfocom_author.pdf

Contact:
Dr. Thomas Hermann, Bielefeld University
Cluster of Excellence Cognitive Interaction Technology (CITEC)
Telephone: +49 521 106-12140
Email: [email protected]

Tobias Hildebrandt, University of Vienna
Faculty of Informatics
Telephone: +43-1-4277-79141
Email: [email protected]

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

Leak Reminds Us to Watch Palantir Because It Could Be Watching Us – IEEE Spectrum

Published in From the WWW on January 14th, 2015 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

Photo: Tekla Perry. Though Palantir’s name is clearly visible through the windows of the company’s Palo Alto cafeteria, most of the company’s office buildings are unmarked, and a guard outside the cafeteria tries to discourage the taking of photographs.

Walk around downtown Palo Alto these days and you’ll spot a number of prime office buildings with frosted or shaded windows and no signs on their doors. Stealth startups? Hardly. Odds are you’re looking at real estate leased by data analysis pioneer Palantir.

Read the full/original article from IEEE Spectrum

Palantir was founded in 2004 by a team that includes PayPal cofounder Peter Thiel with a plan to “reduce terrorism while preserving civil liberties” using technology similar to that developed at PayPal for antifraud efforts. The CIA was both an early investor and an early customer, and Thiel himself over the years invested some $30 million in the venture. Palantir continues to feed services and technology to U.S. national security agencies, but now also does work for state and local governments and corporations. It’s big and getting bigger, with a reported 1200 employees in 2013 at some nine locations around the world, including McLean, Va. (of course), Abu Dhabi, Tel Aviv, Asia, Australia, and New Zealand. And it’s still hiring.

Photo: Tekla Perry. Palantir’s Palo Alto employees stream into the company’s cafeteria building for lunch.

Palantir is worth a lot: it was officially valued at $9 billion in 2013, and a recent funding round reportedly bumped that up to over $11 billion; but there are questions about whether such a company could survive going public and the disclosures and pressures that would involve.

Palantir’s software acts as a natural language interface to sets of data and allows users to connect different types of data sets to better understand relationships. Palantir’s web site advertises that it can help people in healthcare, finance, disaster preparedness, cybersecurity, and other areas. It gives a few examples of what people have done with its tools—like uncover the GhostNet network of infected computers and spot shipments of elephant ivory. And last year the New York Times reported on a few more projects—like helping Hershey increase chocolate profits and JPMorgan Chase to sell foreclosed homes.

Photo: Tekla Perry. A Palantir employee strolls through downtown Palo Alto.

But generally, Palantir doesn’t talk much (the software was rumored to have pinpointed Osama bin Laden’s location, but that report has never been confirmed). That’s why the news site TechCrunch got a lot of attention this week when it reported that it had gotten its hands on an investment prospectus that featured success stories of organizations using Palantir’s tools, even though the revelations were less than earthshattering. Palantir’s software, as reported by TechCrunch, has been used by:

  1. the Securities Investor Protection Corporation to nail Bernie Madoff by sorting through 40 years of records
  2. the Los Angeles Police Department to support “cops on the street and the officers doing the investigations”
  3. the Centers for Medicare and Medicaid Services to identify potentially fraudulent medical providers
  4. the Pentagon to track roadside bomb deployment—and to discover that garage-door openers were being used as detonators
  5. the CIA and FBI to connect their databases
  6. the International Consortium of Investigative Journalists to look into trafficking of human tissue

Photo: Tekla Perry. Many of Palantir’s buildings are unmarked; the sign on this one is subtle, to say the least.

Some say this recent leak was a blatant publicity attempt by Palantir as it seeks additional investors. If so, it worked; if not, it’s no disaster for the organization, and none of the information released compromises current efforts. It did turn attention to a company that has, for the most part, been avoiding the spotlight, except when it does good for the community, like its recent effort to teach low-income high school students how to code, or when employees complain about the lack of housing near their downtown Palo Alto offices.

We probably should be paying more attention to Palantir. The Internet of Things is quickly increasing the amount of data about our lives that is available, and it would be nice to know what might be done with it—and who might be doing it—before the cloud knows about every breath we take. The interest in this “leak” demonstrates that it’s time for Palantir to get out from behind that frosted glass. Or at least put signs on its doors, because it would be nice to know if downtown Palo Alto really has become a Palantir campus.

Read the full/original article from IEEE Spectrum

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.


Toward quantum chips – MIT News

Published in From the WWW on January 14th, 2015 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

One of the researchers’ new photon detectors, deposited athwart a light channel — or “waveguide” (horizontal black band) — on a silicon optical chip.
Image courtesy of Nature Communications

Packing single-photon detectors on an optical chip is a crucial step toward quantum-computational circuits. A team of researchers has built an array of light detectors sensitive enough to register the arrival of individual light particles, or photons, and mounted them on a silicon optical chip. Such arrays are crucial components of devices that use photons to perform quantum computations.

Single-photon detectors are notoriously temperamental: Of 100 deposited on a chip using standard manufacturing techniques, only a handful will generally work. In a paper appearing today in Nature Communications, the researchers at MIT and elsewhere describe a procedure for fabricating and testing the detectors separately and then transferring those that work to an optical chip built using standard manufacturing processes.

Read the original/reproduced from MIT News

In addition to yielding much denser and larger arrays, the approach also increases the detectors’ sensitivity. In experiments, the researchers found that their detectors were up to 100 times more likely to accurately register the arrival of a single photon than those found in earlier arrays.

“You make both parts — the detectors and the photonic chip — through their best fabrication process, which is dedicated, and then bring them together,” explains Faraz Najafi, a graduate student in electrical engineering and computer science at MIT and first author on the new paper.

Thinking small

According to quantum mechanics, tiny physical particles are, counterintuitively, able to inhabit mutually exclusive states at the same time. A computational element made from such a particle — known as a quantum bit, or qubit — could thus represent zero and one simultaneously. If multiple qubits are “entangled,” meaning that their quantum states depend on each other, then a single quantum computation is, in some sense, like performing many computations in parallel.
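A small numerical illustration of the entanglement idea, independent of the hardware discussed in this article: a two-qubit Bell state concentrates probability on the correlated outcomes 00 and 11, unlike a product of two single-qubit superpositions, which spreads probability over all four outcomes.

```python
# Build a Bell state as a length-4 amplitude vector and compare its measurement
# probabilities with those of an unentangled product state.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Bell state (|00> + |11>) / sqrt(2): measuring one qubit fixes the other.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print("Bell state P(00), P(01), P(10), P(11) =", np.abs(bell) ** 2)   # [0.5, 0, 0, 0.5]

# A product state |+>|+> by contrast spreads probability over all four outcomes.
plus = (ket0 + ket1) / np.sqrt(2)
print("product state probabilities        =", np.abs(np.kron(plus, plus)) ** 2)
```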

With most particles, entanglement is difficult to maintain, but it’s relatively easy with photons. For that reason, optical systems are a promising approach to quantum computation. But any quantum computer — say, one whose qubits are laser-trapped ions or nitrogen atoms embedded in diamond — would still benefit from using entangled photons to move quantum information around.

“Because ultimately one will want to make such optical processors with maybe tens or hundreds of photonic qubits, it becomes unwieldy to do this using traditional optical components,” says Dirk Englund, the Jamieson Career Development Assistant Professor in Electrical Engineering and Computer Science at MIT and corresponding author on the new paper. “It’s not only unwieldy but probably impossible, because if you tried to build it on a large optical table, simply the random motion of the table would cause noise on these optical states. So there’s been an effort to miniaturize these optical circuits onto photonic integrated circuits.”

The project was a collaboration between Englund’s group and the Quantum Nanostructures and Nanofabrication Group, which is led by Karl Berggren, an associate professor of electrical engineering and computer science, and of which Najafi is a member. The MIT researchers were also joined by colleagues at IBM and NASA’s Jet Propulsion Laboratory.

Relocation

The researchers’ process begins with a silicon optical chip made using conventional manufacturing techniques. On a separate silicon chip, they grow a thin, flexible film of silicon nitride, upon which they deposit the superconductor niobium nitride in a pattern useful for photon detection. At both ends of the resulting detector, they deposit gold electrodes.

Then, to one end of the silicon nitride film, they attach a small droplet of polydimethylsiloxane, a type of silicone. They then press a tungsten probe, typically used to measure voltages in experimental chips, against the silicone.

“It’s almost like Silly Putty,” Englund says. “You put it down, it spreads out and makes high surface-contact area, and when you pick it up quickly, it will maintain that large surface area. And then it relaxes back so that it comes back to one point. It’s like if you try to pick up a coin with your finger. You press on it and pick it up quickly, and shortly after, it will fall off.”

With the tungsten probe, the researchers peel the film off its substrate and attach it to the optical chip.

In previous arrays, the detectors registered only 0.2 percent of the single photons directed at them. Even on-chip detectors deposited individually have historically topped out at about 2 percent. But the detectors on the researchers’ new chip got as high as 20 percent. That’s still a long way from the 90 percent or more required for a practical quantum circuit, but it’s a big step in the right direction.

“This work is a technical tour de force,” says Robert Hadfield, a professor of photonics at the University of Glasgow who was not involved in the research. “There is potential for scale-up to large circuits requiring hundreds of detectors using commercial pick-and-place technology.”

Read the original/reproduced from MIT News

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

The 8080 chip at 40: What’s next for the mighty microprocessor? – ComputerWorld

Published in From the WWW on January 14th, 2015 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

Credit: Intel

It came out in 1974 and was the basis of the MITS Altair 8800, for which two guys named Bill Gates and Paul Allen wrote BASIC, and millions of people began to realize that they, too, could have their very own, personal, computer. Now, some 40 years after the debut of the Intel 8080 microprocessor, the industry can point to direct descendants of the chip that are astronomically more powerful (see sidebar, below). So what’s in store for the next four decades?

Reproduced from/Read the original/full article from ComputerWorld

For those who were involved with, or watched, the birth of the 8080 and know about the resulting PC industry and today’s digital environment, escalating hardware specs aren’t the concern. These industry watchers are more concerned with the decisions that the computer industry, and humanity as a whole, will face in the coming decades.

While at Intel, Italian immigrant Federico Faggin designed the 8080 as an enhancement of Intel’s 8008 chip — the first eight-bit microprocessor, which had debuted two years earlier. The 8008, in turn, had been a single-chip emulation of the processor in the Datapoint 2200, a desktop computer introduced by the Computer Terminal Corp. of Texas in late 1970.

Chief among the Intel 8080’s many improvements was the use of 40 connector pins, as opposed to 18 in the 8008. The presence of only 18 pins meant that some I/O lines had to share pins. That had forced designers to use several dozen support chips to multiplex the I/O lines on the 8008, making the chip impractical for many uses, especially for hobbyists.

“The 8080 opened the market suggested by the 8008,” says Faggin.

As for the future, he says he hopes to see development that doesn’t resemble the past. “Today’s computers are no different in concept from the ones used in the early 1950s, with a processor and memory and algorithms executed in sequence,” Faggin laments, and he’d like to see that change.

He holds out some hope for the work done to mimic other processes, particularly those in biology. “The way information processing is done inside a living cell is completely different from conventional computing. In living cells it’s done by non-linear dynamic systems whose complexity defies the imagination — billions of parts exhibiting near-chaotic behavior. But imagine the big win when we understand the process.

Federico Faggin, holding a 4004 chip in 2011. The 4004, which he designed, was a precursor to the 8080.

“Forty years from now we will have begun to crack the nut — it will take huge computers just to do the simulations of structures with that kind of dynamic behavior,” Faggin says. “Meanwhile, progress in computation will continue using the strategies we have developed.”

Nick Tredennick, who in the late 1970s was a designer for the Motorola 68000 processor later used in the original Apple Macintosh, agrees. “The big advances I see coming in the next four decades would be our understanding of what I call bio-informatics, based on biological systems,” he says. “We will start to understand and copy the solutions that nature has already evolved.”

Carl Helmers, who founded Byte magazine for the PC industry in 1975, adds, “With all our modern silicon technology, we are still only implementing specific realizations of universal Turing machines, building on the now nearly 70-year-old concept of the Von Neumann architecture.”

Human-digital synthesis?

How we will interface with computers in the future is of more concern to most experts than is the nature of the computers themselves.

“The last four decades were about creating the technical environment, while the next four will be about merging the human and the digital domains, merging the decision-making of the human being with the number-crunching of a machine,” says Rob Enderle, an industry analyst for the past three decades.

. . .

Continue reading the full article from ComputerWorld

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

EM-DOSBOX in-browser emulator – MS-DOS Games Library

Published in Snippet on January 11th, 2015 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

Prince of Persia!

The Internet Archive beta-released a library of MS-DOS games bootable and playable via the EM-DOSBOX in-browser emulator. Games include Oregon Trail, Prince of Persia, Wolfenstein, Sim City… take a trip down memory lane, or show young whippersnappers how hard life was back in your day.

Software Library: MS-DOS Games

The collection includes action, strategy, adventure and other unique genres of game and entertainment software. Through the use of the EM-DOSBOX in-browser emulator, these programs are bootable and playable. Please be aware this browser-based emulation is still in beta – contact Jason Scott, Software Curator, if there are issues or questions. Thanks to eXo for contributions and assistance with this archive.

Reproduced and/or syndicated content. All content and images are copyright of their respective owners.
