DARPA’s Chipset Runs an Astonishing 1 Trillion Cycles Per Second – Gizmodo

Published by in From the WWW on October 31st, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Photo: Singkham

DARPA’s boffins have just set a new world record in computing with a solid-state integrated circuit that packs the power of a supercomputer into a single chip. You thought your six-core, 3.9 GHz Mac Pro was a computational powerhouse? This single chip is around 250 times as fast.

The record-setting chip, dubbed the Terahertz Monolithic Integrated Circuit (TMI), is the handiwork of Northrop Grumman as part of DARPA’s Terahertz Electronics program. The TMI’s overwhelming speed cannot be overstated. It blew the doors off the existing 850 GHz record, set in 2012, by a cool 150 billion cycles per second.

Read the full article/reproduced from GIZMODO

While the chip itself runs at Ludicrous Speed, researchers have for years struggled to find a way to control that power. As the DARPA press release explains:

Current electronics using solid-state technologies have largely been unable to access the sub-millimeter band of the electromagnetic spectrum due to insufficient transistor performance. To address the “terahertz gap,” engineers have traditionally used frequency conversion—converting alternating current at one frequency to alternating current at another frequency—to multiply circuit operating frequencies up from millimeter-wave frequencies. This approach, however, restricts the output power of electrical devices and adversely affects signal-to-noise ratio. Frequency conversion also increases device size, weight and power supply requirements.

But that’s where the TMI’s amplification comes in. The system has exhibited gain levels (the difference between input and output signals, measured on the logarithmic scale) of 6 decibels at 1 THz, which according to Dev Palmer, DARPA program manager, is strong enough to begin seriously researching real-world applications.
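As a point of reference, decibels convert to a linear power ratio as 10^(dB/10), so 6 dB of gain corresponds to roughly a fourfold increase in output power over input. A minimal sketch of that conversion (plain decibel arithmetic; nothing here is specific to the TMI hardware):

```python
# Convert a power gain expressed in decibels into a linear ratio.
# The 6 dB reported for the TMI amplifier works out to roughly 4x.

def db_to_power_ratio(gain_db: float) -> float:
    """Linear power ratio for a gain in decibels: P_out / P_in = 10^(dB/10)."""
    return 10 ** (gain_db / 10.0)

if __name__ == "__main__":
    print(f"6 dB gain -> {db_to_power_ratio(6.0):.2f}x output power")  # ~3.98x
```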

“This breakthrough could lead to revolutionary technologies such as high-resolution security imaging systems, improved collision-avoidance radar, communications networks with many times the capacity of current systems and spectrometers that could detect potentially dangerous chemicals and explosives with much greater sensitivity,” Palmer said in a press release.

There’s no word, however, on where or when the technology will actually make its first appearance, but don’t expect this to be showing up in keynotes anytime soon. [DARPA]

Read the full article/reproduced from GIZMODO

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Researchers Take Big-Data Approach to Estimate Range of Electric Vehicles – NC State University

Published by in From the WWW on October 26th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Researchers from North Carolina State University have developed new software that estimates how much farther electric vehicles can drive before needing to recharge. The new technique requires drivers to plug in their destination and automatically pulls in data on a host of variables to predict energy use for the vehicle.

“Electric cars already have range-estimation software, but we believe our approach is more accurate,” says Dr. Habiballah Rahimi-Eichi, a postdoctoral researcher at NC State and lead author of a paper on the work.

Dr. Mo-Yuen Chow. Matt Shipman.

Read the original/reproduced/full article from NC State University

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Algorithms / Machine Learning / Architectures etc.? Contact me – I can help; this is one of my primary areas of expertise.

“Existing technologies estimate remaining range based on average energy consumption of the past 5 miles, 15 miles, etc.,” Rahimi-Eichi says. “By plugging in the destination, our software looks at traffic data, whether you’ll be on the highway or in the city, weather, road grade, and other variables. This predictive, big-data approach is a significant step forward, reducing the range estimation error to a couple of miles. In some case studies, we were able to get 95 percent range estimation accuracy.”

The software takes all of the data related to the route between starting point and destination and uses big data techniques to determine which pieces of information are important and extract key features that can be plugged into an algorithm to estimate how far the vehicle can go before recharging.

But two other variables are also plugged into the algorithm: the performance characteristics of the vehicle and its battery; and the amount of charge remaining in the battery. The state of charge is estimated using a patented technique developed by Rahimi-Eichi and Dr. Mo-Yuen Chow in 2012. Chow is a professor of electrical and computer engineering at NC State and a co-author of the paper.
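The article doesn’t spell out the algorithm, but the structure it describes (predicted per-mile energy use from route features, combined with the battery’s capacity and state of charge) can be sketched roughly as follows. Every feature name and coefficient below is an illustrative placeholder, not the NC State implementation:

```python
# Illustrative range estimator in the spirit described above (not the
# published method): predict energy use per mile from route features,
# then divide the usable energy implied by the state of charge by it.

def predict_energy_per_mile(route):
    """Estimate kWh/mile from route features; all weights are made-up placeholders."""
    kwh_per_mile = 0.28                                   # nominal consumption
    kwh_per_mile += 0.05 * route["city_fraction"]         # stop-and-go driving penalty
    kwh_per_mile += 0.02 * route["avg_grade_percent"]     # average uphill grade
    kwh_per_mile += 0.04 * route["traffic_index"]         # congestion, 0 (free) to 1 (jammed)
    kwh_per_mile += 0.03 * max(0.0, (50 - route["temperature_f"]) / 50)  # cold-weather HVAC load
    return kwh_per_mile

def estimate_range_miles(route, battery_kwh, state_of_charge):
    """Remaining range = usable energy / predicted energy per mile.
    The state of charge itself would come from a separate estimator."""
    usable_kwh = battery_kwh * state_of_charge
    return usable_kwh / predict_energy_per_mile(route)

route = {"city_fraction": 0.4, "avg_grade_percent": 1.0,
         "traffic_index": 0.3, "temperature_f": 45}
print(f"Estimated range: {estimate_range_miles(route, 24.0, 0.80):.0f} miles")
```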

“People have a lot of ‘range anxiety’ in regard to electric vehicles – they’re afraid they’ll get stuck on the side of the road,” Chow says. “Hopefully, our new range estimation software will make people more confident about using electric vehicles.”

The paper, “Big-Data Framework for Electric Vehicle Range Estimation,” will be presented at the 40th Annual Conference of the IEEE Industrial Electronics Society, being held Oct. 29 to Nov. 1 in Dallas, Texas.

Note to Editors: The presentation abstract follows.

“Big-Data Framework for Electric Vehicle Range Estimation”

Authors: Habiballah Rahimi-Eichi and Mo-Yuen Chow, North Carolina State University

Presented: Oct. 29-Nov. 1, 40th Annual Conference of the IEEE Industrial Electronics Society, Dallas, Texas

Abstract: Range anxiety is a major contributor in low penetration of electric vehicles into the transportation market. Although several methods have been developed to estimate the remaining charge of the battery, the remaining driving range is a parameter that is related to different standard, historical, and real-time data. Most of the existing range estimation approaches are established on an overly simplified model that relies on a limited collection of data. However, the sensitivity and reliability of the range estimation algorithm changes under different environmental and operating conditions; and it is necessary to have a structure that is able to consider all data related to the range estimation. In this paper, we propose a big data based range estimation framework that is able to collect different data with various structures from numerous resources; organize and analyze the data, and incorporate them in the range estimation algorithm. MATLAB/SIMULINK code is demonstrated to read real-time and historical data from different web databases and calculate the remaining driving range.

Read the original/reproduced/full article from NC State University

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.


How the news feed on Facebook decides what you get to see – MIT Review

Published by in From the WWW, Snippet on October 25th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Karrie Karahalios

Algorithm Awareness – Increasingly, it is algorithms that choose which products to recommend to us and algorithms that decide whether we should receive a new credit card. But these algorithms are buried outside our perception. How does one begin to make sense of these mysterious hidden forces?

By Karrie Karahalios. October 21, 2014. Reproduced from/read the full article at MIT Reviews

The question gained resonance recently when Facebook revealed a scientific study on “emotion contagion” that had been conducted by means of its news feed. The study showed that displaying fewer positive updates in people’s feeds causes them to post fewer positive and more negative messages of their own. This result is interesting but disturbing, revealing the full power of Facebook’s algorithmic influence as well as its willingness to use it.

To explore the issue of algorithmic awareness, in 2013 three colleagues and I built a tool that helps people understand how their Facebook news feed works.

Using Facebook’s own programming interface, our tool displayed a list of stories that appeared on one’s news feed on the left half of the screen. On the right, users saw a list of stories posted by their entire friend network—that is, they saw the unadulterated feed with no algorithmic curation or manipulation.

A third panel showed which friends’ posts were predominantly hidden and which friends’ posts appeared most often. Finally, the tool allowed users to manually choose which posts they desired to see and which posts they wanted to discard.
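Setting the Facebook API aside, the core comparison the tool surfaced can be illustrated in isolation: given every post a user’s friends made and the subset the curated feed actually showed, compute what was hidden and which friends were filtered most. The sketch below is a toy stand-in with invented post data, not the study’s code:

```python
# Toy stand-in for the comparison the tool displayed: which of your
# friends' posts the curated feed hid, and which friends were hidden most.
from collections import Counter

def compare_feeds(all_posts, shown_posts):
    """Each post is a dict with 'id' and 'author'.
    Returns the hidden posts and a per-friend count of hidden posts."""
    shown_ids = {post["id"] for post in shown_posts}
    hidden = [post for post in all_posts if post["id"] not in shown_ids]
    return hidden, Counter(post["author"] for post in hidden)

all_posts = [{"id": 1, "author": "alice"}, {"id": 2, "author": "bob"},
             {"id": 3, "author": "alice"}, {"id": 4, "author": "carol"}]
shown_posts = [{"id": 2, "author": "bob"}, {"id": 4, "author": "carol"}]

hidden, hidden_by_friend = compare_feeds(all_posts, shown_posts)
print(f"{len(hidden)} of {len(all_posts)} posts hidden; most filtered: {hidden_by_friend.most_common(1)}")
```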

We recruited 40 people—a small sample but one closely representative of the demographics of the U.S.—to participate in a study to see how they made sense of their news feed. Some were shocked to learn that their feed was manipulated at all. But by the end of our study, as participants chose what posts they wanted to see, they found value in the feed they curated.

When we followed up months later, many said they felt empowered. Some had changed their Facebook settings so they could manipulate the feed themselves. Of the 40 participants, one person quit using Facebook altogether because it violated an expectation of how a feed should work.

The public outcry over Facebook’s emotion study showed that few people truly grasp the way algorithms shape the world we experience. And our research shows the importance of empowering people to take control of that experience.

We deserve to understand the power that algorithms hold over us, for better or worse.

Reproduced from/read the full article at MIT Reviews

Karrie Karahalios is an associate professor of computer science at the University of Illinois.

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

The Exascale Revolution – HPC Wire

Published by in From the WWW on October 25th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

The post-petascale era is marked by systems with far greater parallelism and architectural complexity. Failing some game-changing innovation, crossing the next 1000x performance barrier will be more challenging than previous efforts. At the 2014 Argonne National Laboratory Training Program on Extreme Scale Computing (ATPESC), held in August, Professor Pete Beckman delivered a talk on “Exascale Architecture Trends” and their impact on the programming and executing of computational science and engineering applications.

By Tiffany Trader. Read the full article / reproduced from HPCWire

It’s a unique point in time, says Beckman, director of the Exascale Technology and Computing Institute. While we can’t completely future-proof code, there are trends that will impact programming best practices.

When it comes to the current state of HPC, Beckman shares a chart from Peter Kogge of Notre Dame detailing three major trends, which can be traced back to 2004.

  • The power ceiling
  • The clock ceiling
  • Growing socket and core counts

As Kogge illustrates, there was a fundamental shift in 2004. Computing reached a point where chips couldn’t get any hotter, clocks stopped scaling, and there was no more free performance lunch.

“Now the parallelism in your application is increasing dramatically with every generation,” says Beckman. “We have this problem, we can’t make things take much more power per package, we’ve hit the clock ceiling, we’re now scaling by adding parallelism, and there’s a power problem at the heart of this, which translates into all sorts of other problems, with memory and so on.”

To illustrate the power issue, Beckman compares the IBM Blue Gene/Q system to its predecessor, the Blue Gene/P system. Blue Gene/Q is about 20 times faster and uses four times more power, making it five times more power efficient. This seems like very good progress. But with further extrapolation, it is evident that an exascale system built on this 5x trajectory would consume 64 MW of power. To add further perspective, consider that a megawatt costs about $1 million a year in electricity, putting this cost at $64 million a year.
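The arithmetic behind those figures is worth making explicit; the sketch below simply restates the numbers in the paragraph above (20x the performance at 4x the power, a 64 MW extrapolated machine, and roughly $1 million per megawatt-year of electricity):

```python
# Restating the Blue Gene comparison and the exascale power-cost
# extrapolation quoted above; no new data, just the stated figures.

perf_ratio = 20.0        # Blue Gene/Q is ~20x faster than Blue Gene/P
power_ratio = 4.0        # ...while drawing ~4x the power
print(f"Efficiency gain per generation: {perf_ratio / power_ratio:.0f}x")   # 5x

exascale_power_mw = 64.0           # extrapolated draw on the 5x trajectory
dollars_per_mw_year = 1_000_000    # rough cost of a megawatt-year of electricity
annual_cost = exascale_power_mw * dollars_per_mw_year
print(f"Annual electricity bill: ${annual_cost / 1e6:.0f} million")         # $64 million
```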

Beckman emphasizes the international nature of this problem. Japan, for example, has set an ambitious target of 2020 for its exascale computing strategy, which is being led by RIKEN Advanced Institute for Computational Science. Although they have not locked down all the necessary funding, they estimate a project cost of nearly $1.3 billion.

Regions around the world have come to the conclusion that the exascale finish line is unlike previous 1000x efforts and will require international collaboration. Beckman points to TOP500 list stagnation as indicative of the difficulty of this challenge. In light of this, Japan and the US have signed a formal agreement at ISC to collaborate on HPC system software development.

Europe is likewise pursuing similar agreements with the US and Japan. As part of its Horizon 2020 program, Europe is planning to invest 700 million Euros between 2014 and 2020 to fund next-generation systems. Part of this initiative includes a special interest in establishing a Euro-centric HPC vendor base.

No discussion of the global exascale race would be complete without mentioning China, which has operated the fastest computer in the world, Tianhe-2, for the last three iterations of the TOP500 list. Tianhe-2 is energy-efficient for its size, with a power draw of 24 MW including cooling; however, the expense has meant that it is not kept running all the time.

Principally an Intel-powered system, Tianhe-2 also contains homegrown elements developed by China’s National University of Defense Technology (NUDT), including SPARC-derived CPUs, a high-speed interconnect, and its operating system, which is a Linux variant. China continues to invest heavily in HPC technology. Beckman says we can expect to see one of the next machines from China – likely in the top 10 – composed entirely of native technology.

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Algorithms / Architectures etc.? Contact me – I can help; this is one of my primary areas of expertise.

Can the exponential progress continue?

Looking at the classic History of Supercomputing chart, systems seem likely to continue hitting their performance marks if their massive power footprints are tolerable. At the device level, there is stress with regard to feature sizes nearing fundamental limits. “Unless there is a revolution of some sort, we really can’t get off the curve that is heading towards a 64MW supercomputer,” says Beckman. “It’s about power, both in the number of chips and the total dissipation of each of the chips.”

Beckman cites some of the forces of change with regard to software, including memory, threads, messaging, resilience and power. At the level of the programming model and the OS interface, Beckman suggests the need for coherence islands as well as persistence.

With increased parallelism, the notion that equal work is equal time is going away, and variability (noise, jitter) is the new norm. “The architecture will begin to show even more variability between components and your algorithms and your approaches, whether it’s tasks or threads, will address that in the future,” Beckman tells his audience, “and as we look toward exascale, the programmer who can master this feature well, will do well.”

Attracting and training the next generation of HPC users is a top priority for premier HPC centers like Argonne National Laboratory. One way that Argonne tackles this challenge is by holding an intensive summer school in extreme-scale computing. Tracing its summer program back to the 1980s, the presentations are worthwhile not just for the target audience – a select group of mainly PhD students and postdocs – but for anyone who is keenly interested in the state of HPC, where it’s come from and where it’s going.

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Linux Foundation Dronecode Project Takes Flight – eWeek

Published by in From the WWW, Snippet on October 22nd, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

The open-source collaboration project leverages embedded Linux in a bid to open up unmanned aerial vehicles to development.

The Linux Foundation is taking its efforts to foster new levels of open-source collaboration to new heights today with the launch of the Dronecode Project. Dronecode is an effort to help build an open platform for software that enables nonmilitary unmanned aerial vehicles (UAVs), commonly known as drones. The Dronecode Project is now part of the Linux Foundation Collaboration Projects initiative that brings people, process and technology best practices to open-source code development. The Dronecode Project will join other Linux Foundation Collaboration Projects, including the Yocto Project, which is an effort to build embedded Linux platforms.

By Sean Michael Kerner. Reproduced /read the full article from eWeek

Jim Zemlin, executive director of the Linux Foundation, told eWEEK that Dronecode leverages the Yocto Project and there are potential synergies across the two projects. As to how the Linux Foundation got involved with Dronecode, Zemlin said he was approached by Chris Anderson, founder of the APM (ArduPilotMega) UAV platform, and open-source developer Andrew Tridgell to help them advance the state of open-source drone code. Tridgell is well-known in the open-source development world as a key contributor to the Samba file server.

“The APM UAV project itself is not new and has had active contributors for several years,” Zemlin said. “The project has grown up pretty well, but it has now reached a size where it can benefit from having a neutral place where the project can be housed and people can invest with an equal say.”

The founding members of the Dronecode Project include 3D Robotics, Baidu, Box, DroneDeploy, Intel, jDrones, Laser Navigation, Qualcomm, SkyWard, Squadrone System, Walkera and Yuneec.

“This software is fueling a lot of the UAV industry, and drones are set to be a real growth market,” Zemlin said. “We’re only at the very tip of the iceberg in terms of what we will see.”

Drones are not just hobbyist devices, he emphasized; they also have very useful and practical commercial applications. Drone technology is useful for mapping, conservation activity, and search and rescue operations. In addition to the APM UAV application code, the Dronecode Project includes the PX4 project code. Zemlin expects other kinds of projects within the UAV ecosystem to join the Dronecode Project over time.

Looking forward, Zemlin is confident that the Dronecode Project at the Linux Foundation will lead to better code and more participation by developers and companies. Multiple vendors are using the APM UAV code today in commercial products, he said. Now that the code is housed in the Dronecode Project, the improvements those vendors make can be contributed back to the project.

“The companies that join Dronecode are all aligned in wanting to share the underlying infrastructure software that will enable their products,” Zemlin said.

Sean Michael Kerner is a senior editor at eWEEK and InternetNews.com. Follow him on Twitter @TechJournalist.

Reproduced /read the full article from eWeek

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Researcher builds system to protect against malicious insiders – ComputerWorld

Published by in From the WWW on October 19th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Algorithms to spot attacks coming from inside the network get Army support.

Credit: Thinkstock. By Sharon Gaudin. ComputerWorld

When an employee turns on his own company, the results — damaged networks, data theft and even work stoppage — could be devastating. It could rock the company even more than an outside attack because the insider knows where sensitive data is kept, what the passwords are and exactly how to hurt the company the most.

That’s the driving force behind the work that Daphne Yao, associate professor of computer science at Virginia Tech, is doing on cybersecurity. Yao, who received an NSF Career award for her human-behavior inspired malware detection work, is developing algorithms that will alert companies when an employee might be acting maliciously on their network.

Read the full article/reproduced from ComputerWorld

And the Army Research Office has awarded her $150,000 to continue her research into finding new ways to detect anomalies caused by system compromises and malicious insiders.

“The challenge is to understand the intention of the user and what the user is trying to do,” Yao said. “Most are doing legitimate work and they’re working their own project and minding their own business. You need a detection system that can guess what the user is trying to do.”

The crux of Yao’s work is to figure out which employees are simply downloading sensitive files or logging onto the network in the middle of the night because they’re trying to get their work done and which employees may be doing the same things because they’re trying to sell proprietary information or crash the network.

According to a 2012 Symantec report, 60% of companies said they had experienced attacks on their systems to steal proprietary information. The most frequent perpetrators were current or former employees or partners in trusted relationships.

In 1996, for instance, a network administrator at Omega Engineering Inc. planted a software time bomb that eradicated all the programs that ran the company’s manufacturing operations at its Bridgeport, N.J. plant.

The trusted IT administrator, Tim Lloyd, effectively stopped the manufacturing company from being able to manufacture, costing the company $12 million in damages and its footing in the high-tech instrument and measurement market. Eighty workers lost their jobs as a result.

Lloyd was tried and convicted of computer sabotage in federal court.

More recently, in 2013 Edward Snowden leaked classified documents about global surveillance programs that he acquired while working as an NSA contractor.

The same year, Pfc. Bradley Manning, an Army intelligence analyst, was sentenced to 35 years for leaking the largest cache of classified documents in U.S. history.

These are the kinds of insider attacks Yao is working to stop.

The Army Research Office did not respond to a request for comment, but Dan Olds, an analyst with The Gabriel Consulting Group, said he’s not surprised that the military is supporting research into detecting insider threats.

“The U.S. military is very concerned about security these days,” added Olds. “The Bradley Manning leaks highlighted the massive damage that even a lowly Pfc can wreak if given access to a poorly secured IT infrastructure. The Snowden and Manning leaks have had a very severe impact on U.S. intelligence activities, disclosing not only the information gathered, but also showing the sources and methods used to get US intelligence data.”

He also said insider-based attacks normally may not get as much media attention as most hacks, but can potentially cause much greater damage since the attacker at least knows where the keys to the castle are hidden. And if that attacker works in IT, he or she might even have the keys.

“Insider threats are many times the most devastating, as they are the least expected,” said Patrick Moorhead, an analyst with Moor Insights & Strategy. “Companies spend most of their security time and money guarding against external threats…. So that sometimes leaves the inside exposed.”

To combat this, Yao is combining big data, analytics and security to design algorithms that focus on linking human activities with network actions.
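The article doesn’t detail Yao’s algorithms, so the sketch below only illustrates the general flavor of behavior-based anomaly detection: score how far a user’s session deviates from that user’s own baseline. All feature names, weights, and thresholds are invented for illustration and are not the Virginia Tech method:

```python
# Crude behavioral anomaly score (illustrative only, not Yao's method):
# weight each deviation of a session from the user's historical baseline.

WEIGHTS = {"login_hour_offset": 0.5,        # hours away from usual login time
           "mb_downloaded": 0.002,          # per extra megabyte pulled
           "sensitive_files": 1.0}          # per extra sensitive file touched

def anomaly_score(session, baseline):
    """Higher scores mean the session looks less like the user's normal behavior."""
    score = WEIGHTS["login_hour_offset"] * abs(session["login_hour"] - baseline["login_hour"])
    score += WEIGHTS["mb_downloaded"] * max(0, session["mb_downloaded"] - baseline["mb_downloaded"])
    score += WEIGHTS["sensitive_files"] * max(0, session["sensitive_files"] - baseline["sensitive_files"])
    return score

baseline = {"login_hour": 9, "mb_downloaded": 200, "sensitive_files": 2}
late_night_bulk_pull = {"login_hour": 2, "mb_downloaded": 5000, "sensitive_files": 40}
print(f"Anomaly score: {anomaly_score(late_night_bulk_pull, baseline):.1f}")  # flag if above a tuned threshold
```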

Continue reading the full article at/reproduced from ComputerWorld

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Brown Dog digs into the deep, dark web – GCN

Published by in From the WWW on October 19th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.


Unstructured data is the bane of researchers everywhere. Although casual Googlers may be frustrated by not being able to open online files, researchers often need to dig into data trapped in outdated formats and uncurated collections with little or no metadata. And according to IDC, up to 90 percent of big data is “dark,” meaning the contents of such files cannot be easily accessed.

Thus, the Brown Dog solution to a long-tail problem. Read the full article/reproduced from GCN

Led by Kenton McHenry and Jong Lee of the Image and Spatial Data Analysis division at the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign, Brown Dog seeks to develop a service that will make uncurated data accessible.

“The information age has made it easy for anyone to create and share vast amounts of digital data, including unstructured collections of images, video and audio as well as documents and spreadsheets,” said McHenry. “But the ability to search and use the contents of digital data has become exponentially more difficult.”

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Algorithms / Architectures etc.? Contact me – I can help; this is one of my primary areas of expertise.

Brown Dog is working to change that. Recipients in 2013 of a $10 million, five-year award from the National Science Foundation, the UI team recently demonstrated two services to make the contents of uncurated data collections accessible.

The first, called Data Access Proxy (DAP), transforms unreadable files into readable ones by linking together a series of computing and translational operations behind the scenes.
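The article doesn’t describe the DAP internals, but “linking together a series of computing and translational operations” reads like path-finding over a catalogue of available format converters. A toy sketch of that idea follows; the converter graph is entirely made up and is not the actual DAP service:

```python
# Toy format-conversion chaining (not the real Data Access Proxy):
# breadth-first search over single-step converters to find a path from
# the format you have to one you can read.
from collections import deque

CONVERTERS = {                       # hypothetical single-step conversions
    "legacy-doc": ["rtf"],
    "rtf": ["docx", "txt"],
    "docx": ["pdf", "txt"],
    "raw-sensor": ["csv"],
    "csv": ["json"],
}

def conversion_path(src, dst):
    """Return the shortest chain of formats from src to dst, or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in CONVERTERS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(conversion_path("legacy-doc", "pdf"))   # ['legacy-doc', 'rtf', 'docx', 'pdf']
```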

Continue reading the full article/reproduced from GCN

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Gartner lays out its top 10 tech trends for 2015 – ComputerWorld

Published by in From the WWW on October 12th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Credit: Nemo via Pixabay / Thinkstock. By Patrick Thibodeau. Computerworld | Oct 7, 2014 12:45 PM PT

Here’s the Gartner list for 2015, reproduced from ComputerWorld

1: Computing Everywhere. To Gartner, this simply means ubiquitous access to computing capabilities. Intelligent screens and connected devices will proliferate, and will take many forms, sizes and interaction styles.

2: The Internet of Things (IoT). IT managers should experiment, get ideas going and empower individuals in IT organizations to develop uses for connected devices and sensors.

3: 3D printing. The technology has been around since 1984, but is now maturing and shipments are on the rise. While consumer 3D printing gets a lot of attention, it’s really the enterprise use that can deliver value.

4: Advanced, Pervasive and Invisible Analytics. Every application is an analytical app today.

5: Context Rich Systems. The user’s identity and location, what they have done in the past, their preferences, social connections and other attributes all become inputs into applications.

6: Smart Machines. Gartner points to global mining company Rio Tinto, which operates autonomous trucks, as an example of the role smart machines will play.

7: Cloud and Client Computing. This highlights the central role of the cloud. An application will reside in a cloud, and it will be able to span multiple clients.

8: Software Defined Applications and Infrastructure. IT can’t work on hard coded, pre-defined elements; it needs to be able to dynamically assemble infrastructure.

9: Web-Scale IT. This is akin to adopting some of the models used by large cloud providers, including their risk-embracing culture and collaborative alignments.

10: Security. In particular, Gartner envisions more attention to application self-protection.

Here’s the Gartner list for 2015, reproduced from ComputerWorld

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

The Big Data Disruption – HortonWorks

Published by in From the WWW on October 9th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Apache Hadoop didn’t disrupt the datacenter, the data did.

The explosion of new types of data in recent years – from inputs such as the web and connected devices, or just sheer volumes of records – has put tremendous pressure on the enterprise data warehouse (EDW).

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Algorithms / Architectures etc.? Contact me – I can help; this is one of my primary areas of expertise.


 

Social Media Data: Win customers’ hearts: With Hadoop, you can mine Twitter, Facebook and other social media conversations for sentiment data about you and your competition, and use it to make targeted, real-time decisions that increase market share.

Server Log Data: Fortify security and compliance: Security breaches happen. And when they do, your server logs may be your best line of defense. Hadoop takes server-log analysis to the next level by speeding and improving security forensics and providing a low-cost platform to show compliance (a minimal mapper/reducer sketch follows this list).

Web Clickstream Data: Show them the way: How do you move customers on to bigger things—like submitting a form or completing a purchase? Get more granular with customer segmentation. Hadoop makes it easier to analyze, visualize and ultimately change how visitors behave on your website.

Machine and Sensor Data: Gain insight from your equipment: Your machines know things. From out in the field to the assembly line floor—machines stream low-cost, always-on data. Hadoop makes it easier for you to store and refine that data and identify meaningful patterns, providing you with the insight to make proactive business decisions.

Geolocation Data: Profit from predictive analytics: Where is everyone? Geolocation data is plentiful, and that’s part of the challenge. The costs to store and process voluminous amounts of data often outweigh the benefits. Hadoop helps reduce data storage costs while providing value-driven intelligence from asset tracking to predicting behavior to enable optimization.
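To make the server-log use case above concrete, here is a minimal Hadoop Streaming job that counts failed logins per source IP. The log layout (whitespace-delimited, source IP in field 3, status in field 6) and the file name logjob.py are assumptions for illustration; adapt them to your own logs and installation:

```python
# Minimal Hadoop Streaming mapper/reducer for server-log analysis:
# count FAILED_LOGIN events per source IP. Log format is assumed.
# Example invocation (paths and jar name will vary by installation):
#   hadoop jar hadoop-streaming.jar \
#       -input /logs/auth/* -output /out/failed-logins \
#       -mapper "python logjob.py map" -reducer "python logjob.py reduce" \
#       -file logjob.py
import sys

def mapper():
    for line in sys.stdin:
        fields = line.split()
        if len(fields) > 6 and fields[6] == "FAILED_LOGIN":
            print(f"{fields[3]}\t1")                 # emit (source_ip, 1)

def reducer():
    current_ip, count = None, 0
    for line in sys.stdin:                           # input arrives sorted by key
        ip, n = line.rstrip("\n").split("\t")
        if ip != current_ip:
            if current_ip is not None:
                print(f"{current_ip}\t{count}")
            current_ip, count = ip, 0
        count += int(n)
    if current_ip is not None:
        print(f"{current_ip}\t{count}")

if __name__ == "__main__":
    {"map": mapper, "reduce": reducer}[sys.argv[1]]()
```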

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Distributed, ‘artificial’ intelligence and machine perception – CARACaS – IEEE Spectrum

Published by in From the WWW, Snippet on October 5th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

Image: U.S. Navy.

A fleet of U.S. Navy boats approached an enemy vessel like sharks circling their prey. The scene might not seem so remarkable compared to any of the Navy’s usual patrol activities, but in this case, part of an exercise conducted by the U.S. Office of Naval Research (ONR), the boats operated without any direct human control: they acted as a robot boat swarm. The tests on Virginia’s James River this past summer represented the first large-scale military demonstration of a swarm of autonomous boats designed to overwhelm enemies. This capability points to a future where the U.S. Navy and other militaries may deploy underwater, surface, and flying robotic vehicles to defend themselves or attack a hostile force. “What’s new about the James River test was having five USVs [unmanned surface vessels] operating together with no humans on board,” said Robert Brizzolara, an ONR program manager.

Read the original/Reproduced from IEEE Spectrum: By Jeremy Hsu

Need expertise with Machine Learning / Internet Scale computing / Hadoop / Big Data / Algorithms / Architectures etc.? Contact me – I can help; this is one of my primary areas of expertise.

In the test, five robot boats practiced an escort mission that involved protecting a main ship against possible attackers. To command the boats, the Navy used a system called the Control Architecture for Robotic Agent Command and Sensing (CARACaS). The system not only steered the autonomous boats but also coordinated their actions with other vehicles—a larger group of manned and remotely controlled vessels.

Brizzolara said the CARACaS system evolved from hardware and software originally used in NASA’s Mars rover program starting 11 years ago. Each robot boat transmits its radar views to the others so the group shares the same situational awareness. The boats also continually compute their own paths to navigate around obstacles and act in a cooperative manner.

Navy researchers installed the system on regular 7-foot and 11-foot boats and put them through a series of exercises designed to test behaviors such as escort and swarming attack. The boats escorted a manned Navy ship before breaking off to encircle a vessel acting as a possible intruder. The five autonomous boats then formed a protective line between the intruder and the ship they were protecting.
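The coordination logic inside CARACaS is not public, but the blocking maneuver described above has a simple geometric core: put the escorts on a line across the bearing from the protected ship to the intruder. The toy sketch below illustrates only that geometry and has nothing to do with the Navy’s actual software:

```python
# Toy blocking-line geometry (illustration only, unrelated to CARACaS):
# place n_boats on a line perpendicular to the ship->intruder bearing,
# a fixed standoff distance from the protected ship.

def blocking_positions(protected, intruder, n_boats, standoff=50.0, spacing=20.0):
    """Return (x, y) positions, in meters, for the escort boats."""
    dx, dy = intruder[0] - protected[0], intruder[1] - protected[1]
    dist = (dx * dx + dy * dy) ** 0.5 or 1.0
    ux, uy = dx / dist, dy / dist              # unit vector toward the intruder
    px, py = -uy, ux                           # perpendicular unit vector (line direction)
    cx, cy = protected[0] + ux * standoff, protected[1] + uy * standoff
    return [(cx + px * spacing * (i - (n_boats - 1) / 2),
             cy + py * spacing * (i - (n_boats - 1) / 2))
            for i in range(n_boats)]

for x, y in blocking_positions(protected=(0.0, 0.0), intruder=(400.0, 300.0), n_boats=5):
    print(f"escort boat at ({x:7.1f}, {y:7.1f}) m")
```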

Photo: John F. Williams/U.S. Navy. An unmanned boat operates autonomously during an Office of Naval Research demonstration of swarm boat technology on the James River in Newport News, Va.

Such robotic swarm technology could transform modern warfare for the U.S. Navy and the rest of the U.S. military by reducing the risk to human personnel. Smart robots and drones that don’t require close supervision could also act as a “force multiplier” consisting of relatively cheap and disposable forces—engaging more enemy targets and presenting more targets for enemies to worry about.

“Numbers may once again matter in warfare in a way they have not since World War II, when the U.S. and its allies overwhelmed the Axis powers through greater mass,” wrote Paul Scharre, a fellow at the Center for a New American Security, a military research institution in Washington, D.C., in an upcoming report titled “Robotics on the Battlefield Part II: The Coming Swarm.”

“Qualitative superiority will still be important, but may not be sufficient alone to guarantee victory,” Scharre wrote. “Uninhabited systems in particular have the potential to bring mass back to the fight in a significant way by enabling the development of swarms of low-cost platforms.”

The Navy does not have a firm timeline for when such robot swarms could become operational. For now, ONR researchers hope to improve the autonomous system in terms of its ability to “see” its surroundings using different sensing technologies. They also want to improve how the boats navigate autonomously around obstacles, even in the most unexpected situations that human programmers haven’t envisioned. But the decision to have such robot boats open fire upon enemy targets will still rest with human sailors.

Reproduced and/or syndicated content. All content and images are copyright of the respective owners.

© all content copyright respective owners