Home » #recent

#recent

Interesting bits and pieces I stumble upon strolling through CyberSpace. The latest 20 posts are shown – follow the links to the Archives sections for a full list of posts on this site.

You can also have a peek at trends in search interest (via Google) on specific topics.


Reproduced and/or syndicated content. All content and images are copyright of their respective owners.

The Internet was delivered to the masses - parallel computing is not far behind - Virginia Tech

Wu Feng

During the past few years, Virginia Tech’s Wu Feng has built upon a National Science Foundation (NSF) / Microsoft grant from the “Computing in the Cloud” program, and synergistically complemented it with subsequent collaborative grants, including a $6 million award from the Air Force on “big computing” for mini-drones and a $1 million award from NSF and the National Institutes of Health on “big data” for the life sciences.

As he wove together the “parallel computing” aspects from each grant, he was able to tell a much larger, more interconnected story – one of delivering parallel computing to the masses. In doing so, he has worked to apply this democratization of parallel computing to an area of emerging importance – the promise of personalized medicine.

Read the original/full article reproduced from Virginia Tech

Need expertise with cloud / parallel / internet-scale computing / MapReduce / Hadoop etc.? Contact me – I can help! This is a core expertise area.

Microsoft took particular notice of Feng’s leadership in this cutting-edge research and succinctly worked the supercomputing expert’s collaborative ideas into one of its global advertising campaigns, describing Virginia Tech scientists and engineers as “leaders in harnessing supercomputer powers to deliver lifesaving treatments.”

This full-page ad ran this summer in the Washington Post, New York Times, USA Today, Wall Street Journal, Bloomberg Businessweek, United Hemispheres, The Economist, Forbes, Fortune, TIME, Popular Mechanics, and Golf Digest, as well as a host of other venues in Philadelphia, Washington, D.C., and Baltimore.

“Delivering personalized medicine to the masses is just one of the grand challenge problems facing society,” said Feng, the Elizabeth and James E. Turner Fellow in Virginia Tech’s Department of Computer Science. “To accelerate the discovery to such grand challenge problems requires more than the traditional pillars of scientific inquiry, namely theory and experimentation. It requires computing. Computing has become our ‘third pillar’ of scientific inquiry, complementing theory and experimentation. This third pillar can empower researchers to tackle problems previously viewed as infeasible.”

So, if computing faster and more efficiently holds the promise of accelerating discovery and innovation, why can’t we simply build faster and faster computers to tackle these grand challenge problems?

“In short, with the rise of ‘big data’, data is being generated faster than our ability to compute on it,” said Feng. “For instance, next-generation sequencers (NGS) double the amount of data generated every eight to nine months while our computational capability doubles only every 24 months, relative to Moore’s Law. Clearly, tripling our institutional computational resources every eight months is not a sustainable solution… and clearly not a fiscally responsible one either. This is where parallel computing in the cloud comes in.”
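To see how quickly that gap opens up, here is a small back-of-the-envelope sketch in TypeScript using the doubling periods quoted above. The 1x starting point and the 8.5-month figure (splitting the quoted “eight to nine months”) are assumptions for illustration only.

```typescript
// Rough growth comparison using the doubling times quoted above.
// Assumes both start at 1x; 8.5 months splits the "eight to nine months" range.
function growthAfter(months: number, doublingMonths: number): number {
  return Math.pow(2, months / doublingMonths);
}

for (const years of [1, 2, 4]) {
  const months = years * 12;
  const data = growthAfter(months, 8.5);   // next-generation sequencing output
  const compute = growthAfter(months, 24); // computational capability (Moore's Law)
  console.log(`${years} year(s): data x${data.toFixed(1)}, compute x${compute.toFixed(1)}, gap x${(data / compute).toFixed(1)}`);
}
```

After four years the data has grown roughly 50-fold while compute capability has grown only 4-fold – the mismatch Feng describes.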

As noted by the National Institute of Standards and Technology, cloud computing is “a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

“The implicit takeaway here is that the configurable computing resources are hosted and maintained by cloud providers such as Microsoft rather than the institution requiring the computing resources. So, rather than having an institution set-up, maintain, and support an information technology infrastructure that is seldom utilized anywhere near its capacity… and having to triple these resources every eight to nine months to keep up with the data deluge of next-generation sequencing, cloud computing is a viable and more cost effective avenue for accessing necessary computational resources on the fly and then releasing them when not needed,” Feng said.
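A toy cost model makes the utilization point concrete. Every number below (node count, hourly rate, fraction of the year the cluster is actually busy) is hypothetical, and the model deliberately ignores capital versus operating cost differences; it only shows that paying for idle capacity dominates the bill.

```typescript
// Hypothetical numbers: a 100-node cluster that is heavily used only 5% of the
// year, versus renting the same capacity on demand for just those hours.
const nodes = 100;
const hourlyRatePerNode = 0.5;   // assumed price, $/node-hour
const hoursPerYear = 365 * 24;
const utilization = 0.05;        // fraction of the year the cluster is actually busy

const alwaysOnCost = nodes * hourlyRatePerNode * hoursPerYear;                  // pay for idle time too
const onDemandCost = nodes * hourlyRatePerNode * hoursPerYear * utilization;    // pay only when used

console.log(`always-on: $${alwaysOnCost.toFixed(0)}, on-demand: $${onDemandCost.toFixed(0)}`);
```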

Whether for traditional high-performance computing or cloud computing, Feng is seeking to transform the way that parallel computing systems and environments are designed and the way that people interact with them.

“My analogy would be the Internet, and how it has transformed the way people interact with information,” Feng added. “We need to make a similar transition with parallel computing, whether with the cloud or with traditional high-performance computing such as supercomputers.”

The groundwork for Feng’s big data research in a “cloud” began in the mid-2000s with a multi-institutional effort to identify missing gene annotations in genomes. This effort combined supercomputers from six U.S. institutions into an ad-hoc cloud and generated 0.5 petabytes of data that could only be stored in Tokyo, Japan.

Need expertise with cloud / parallel / internet-scale computing / MapReduce / Hadoop etc.? Contact me – I can help! This is a core expertise area.

Read the original/full article reproduced from Virginia Tech

Visual control of big data - MIT News

Visual control of big data. Image: Christine Daniloff/MIT

Data-visualization tool identifies sources of aberrant results and recomputes visualizations without them.

In the age of big data, visualization tools are vital. With a single glance at a graphic display, a human being can recognize patterns that a computer might fail to find even after hours of analysis. But what if there are aberrations in the patterns? Or what if there’s just a suggestion of a visual pattern that’s not distinct enough to justify any strong inferences? Or what if the pattern is clear, but not what was to be expected?

By Larry Hardesty | MIT News Office | August 15, 2014

Read the full article / Reproduced from MIT News

The Database Group at MIT’s Computer Science and Artificial Intelligence Laboratory has released a data-visualization tool that lets users highlight aberrations and possible patterns in the graphical display; the tool then automatically determines which data sources are responsible for them.

It could be, for instance, that just a couple of faulty sensors among dozens are corrupting a very regular pattern of readings, or that a few underperforming agents are dragging down a company’s sales figures, or that a clogged vent in a hospital is dramatically increasing a few patients’ risk of infection.

Big data is big business

Need expertise with cloud / internet-scale computing / MapReduce / Hadoop etc.? Contact me – I can help! This is a core expertise area.

Visualizing big data is big business: Tableau Software, which sells a suite of visualization tools, is a $4 billion company. But in creating attractive, informative graphics, most visualization software discards a good deal of useful data.

“If you look at the way people traditionally produce visualizations of any sort, they would have some big, rich data set — that has maybe hundreds of millions of data points, or records — and they would do some reduction of the set to a few hundred or thousands of records at most,” says Samuel Madden, a professor of computer science and engineering and one of the Database Group’s leaders. “The problem with doing that sort of reduction is that you lose information about where those output data points came from relative to the input data set. If one of these data points is crazy — is an outlier, for example — you don’t have any real ability to go back to the data set and ask, ‘Where did this come from and what were its properties?’”

That’s one of the problems solved by the new visualization tool, dubbed DBWipes. For his thesis work, Eugene Wu, a graduate student in electrical engineering and computer science who developed DBWipes with Madden and adjunct professor Michael Stonebraker, designed a novel “provenance tracking” system for large data sets.

If a visualization system summarizes 100 million data entries into 100 points to render on the screen, then each of the 100 points will in some way summarize — perhaps by averaging — 1 million data points. Wu’s provenance-tracking system provides a compact representation of the source of the summarized data so that users can easily trace visualized data back to the source — and conversely, track source data to the pixels that are rendered by it.
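The article does not describe DBWipes’ internals, but the core idea – keep a compact mapping from each rendered point back to the raw records it summarizes – can be sketched roughly as below. The fixed-size bucketing and averaging are illustrative assumptions, not Wu’s actual provenance format.

```typescript
// Summarize a large array into `buckets` averaged points, remembering for each
// output point the range of input indices it came from (its provenance).
interface SummaryPoint {
  value: number;                  // the averaged value that gets plotted
  sourceRange: [number, number];  // [start, end) indices in the original data
}

function summarizeWithProvenance(data: number[], buckets: number): SummaryPoint[] {
  const size = Math.ceil(data.length / buckets);
  const out: SummaryPoint[] = [];
  for (let start = 0; start < data.length; start += size) {
    const end = Math.min(start + size, data.length);
    const slice = data.slice(start, end);
    const value = slice.reduce((a, b) => a + b, 0) / slice.length;
    out.push({ value, sourceRange: [start, end] });
  }
  return out;
}

// If a plotted point looks like an outlier, its sourceRange says exactly
// which raw records to go back and inspect.
```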

The idea of provenance tracking is not new, but Wu’s system is particularly well suited to the task of tracking down outliers in data visualizations. Rather than simply handing the user the million data entries that went into an outlier, it first identifies those that most influenced the outlier values, and summarizes those data entries in human-readable terms.

Best paper

Wu and Madden’s work on their “Scorpion” algorithm was selected as one of the best papers of the Very Large Data Bases (VLDB) conference last year. The algorithm tracks down the records responsible for particular aspects of a DBWipes visualization and then efficiently recalculates the visualization to either exclude or emphasize the data they contain.
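The published Scorpion algorithm is considerably more sophisticated, but the underlying notion – score records by their influence on an aggregate, then recompute without the worst offenders – can be illustrated with a naive sketch. This is only meant to show the exclude-and-recompute step; a real system would avoid the brute-force scoring used here.

```typescript
// Toy version of the idea: score each record by how much removing it would
// shift the average, drop the k most influential ones, and recompute.
function influenceOnMean(data: number[], i: number): number {
  const mean = data.reduce((a, b) => a + b, 0) / data.length;
  const rest = data.filter((_, j) => j !== i);
  const meanWithout = rest.reduce((a, b) => a + b, 0) / rest.length;
  return Math.abs(mean - meanWithout);
}

function meanWithoutTopInfluencers(data: number[], k: number): number {
  const ranked = data
    .map((_, i) => ({ i, score: influenceOnMean(data, i) }))
    .sort((a, b) => b.score - a.score);
  const excluded = new Set(ranked.slice(0, k).map(r => r.i));
  const kept = data.filter((_, i) => !excluded.has(i));
  return kept.reduce((a, b) => a + b, 0) / kept.length;
}

// Two faulty-sensor readings dominate the average until they are excluded.
console.log(meanWithoutTopInfluencers([20, 21, 19, 22, 500, 480], 2)); // ~20.5
```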

. . .  Continue reading the full article at / Reproduced from MIT News

Programming Language That Accommodates Multiple Languages in Same Program - CMU

Wyvern Language Protects Computers From Code Injection Attacks. Contact: Byron Spice / 412-268-9068 / spice@cs.cmu.edu

PITTSBURGH—Computer scientists at Carnegie Mellon University have designed a way to safely use multiple programming languages within the same program, enabling programmers to use the language most appropriate for each function while guarding against code injection attacks, one of the most severe security threats in Web applications today.

A research group led by Jonathan Aldrich, associate professor in the Institute for Software Research (ISR), is developing a programming language called Wyvern that makes it possible to construct programs using a variety of targeted, domain-specific languages, such as SQL for querying databases or HTML for constructing Web pages, as sublanguages, rather than writing the entire program using a general purpose language.

Read the original article/reproduced from Carnegie Mellon University

Wyvern determines which sublanguage is being used within the program based on the type of data that the programmer is manipulating. Types specify the format of data, such as alphanumeric characters, floating-point numbers or more complex data structures, such as Web pages and database queries.

The type provides context, enabling Wyvern to identify a sublanguage associated with that type in the same way that a person would realize that a conversation about gourmet dining might include some French words and phrases, explained Joshua Sunshine, ISR systems scientist.

“Wyvern is like a skilled international negotiator who can smoothly switch between languages to get a whole team of people to work together,” Aldrich said. “Such a person can be extremely effective and, likewise, I think our new approach can have a big impact on building software systems.”

Many programming tasks can involve multiple languages; when building a Web page, for instance, HTML might be used to create the bulk of the page, but the programmer might also include SQL to access databases and JavaScript to allow for user interaction. By using type-specific languages, Wyvern can simplify that task for the programmer, Aldrich said, while also avoiding workarounds that can introduce security vulnerabilities.
One common but problematic practice is to paste together strings of characters to form a command in a specialized language, such as SQL, within a program. If not implemented carefully, however, this practice can leave computers vulnerable to two of the most serious security threats on the Web today – cross-site scripting attacks and SQL injection attacks. In the latter case, for instance, someone with knowledge of computer systems could use a login/password form or an order form on a Web site to type in a command such as DROP TABLE that could wipe out a database.
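To make the string-pasting problem concrete, here is a minimal sketch. The `Db` interface is a hypothetical stand-in for any driver that supports parameterized queries; Wyvern’s approach goes further by treating the query as a typed SQL sublanguage rather than a string at all.

```typescript
// A minimal stand-in for a database driver that supports parameterized queries.
interface Db {
  query(sql: string, params: unknown[]): Promise<unknown[]>;
}

// Vulnerable: the query is assembled by pasting strings together.
// Input like  x'; DROP TABLE users; --  turns a lookup into a destructive command.
function findUserUnsafe(name: string): string {
  return "SELECT * FROM users WHERE name = '" + name + "'";
}

// Safer: the SQL text stays fixed and user input travels separately as a value,
// so it can never be interpreted as SQL.
async function findUserSafe(db: Db, name: string): Promise<unknown[]> {
  return db.query("SELECT * FROM users WHERE name = ?", [name]);
}
```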

“Wyvern would make the use of strings for this purpose unnecessary and thus eliminate all sorts of injection vulnerabilities,” Aldrich said.

Previous attempts to develop programming languages that could understand other languages have faced tradeoffs between composability and expressiveness; they were either limited in their ability to unambiguously determine which embedded language was being used, or limited in which embedded languages could be used.

“With Wyvern, we’re allowing you to use these languages, and define new ones, without worrying about composition,” said Cyrus Omar, a Ph.D. student in the Computer Science Department and the lead designer of Wyvern’s type-specific language approach.
Wyvern is not yet fully engineered, Omar noted, but is an open source project that is ready for experimental use by early adopters. More information is available at http://www.cs.cmu.edu/~aldrich/wyvern/. A research paper, “Safely Composable Type-Specific Languages,” by Omar, Aldrich, Darya Kurilova, Ligia Nistor and Benjamin Chung of CMU and Alex Potanin of Victoria University of Wellington, recently won a distinguished paper award at the European Conference on Object-Oriented Programming in Uppsala, Sweden.

This research was supported in part by the Air Force Research Laboratory, the National Security Agency and the Royal Society of New Zealand Marsden Fund.
The Institute for Software Research and Computer Science Department are part of Carnegie Mellon’s top-ranked School of Computer Science, which is celebrating its 25th year. Follow the school on Twitter @SCSatCMU.

The authors of “Safely Composable Type-Specific Languages” recently won a distinguished paper award at the European Conference on Object-Oriented Programming in Uppsala, Sweden. Pictured (left to right) at the conference are Benjamin Chung, Cyrus Omar, Jonathan Aldrich and Alex Potanin. Not shown: Darya Kurilova, Ligia Nistor.

Read the original article/reproduced from Carnegie Mellon University

Google's big-data tool, Mesa, holds petabytes of data across multiple servers - Computerworld

BigData

IDG News Service – Google has found a way to stretch a data warehouse across multiple data centers, using an architecture its engineers developed that could pave the way for much larger, more reliable and more responsive cloud-based analysis systems. Google researchers will discuss the new technology, called Mesa, at the Conference on Very Large Data Bases, happening next month in Hangzhou, China. A Mesa implementation can hold petabytes of data, update millions of rows of data per second and field trillions of queries per day, Google says. Extending Mesa across multiple data centers allows the data warehouse to keep working even if one of the data centers fails.

Mesa can also field millions of updates and queries per day. By Joab Jackson:  August 8, 2014 02:55 PM ET. Read the full article at/reproduced from ComputerWorld

Need expertise with cloud / internet-scale computing / MapReduce / Hadoop etc.? Contact me – I can help! This is a core expertise area.

Google built Mesa to store and analyze critical measurement data for its Internet advertising business, but the technology could be used for other, similar data warehouse jobs, the researchers said.

“Mesa ingests data generated by upstream services, aggregates and persists the data internally, and serves the data via user queries,” the researchers wrote in a paper describing Mesa.

For Google, Mesa solved a number of operational issues that traditional enterprise data warehouses and other data analysis systems could not.

For one, most commercial data warehouses do not continuously update the data sets, but more typically update them once a day or once a week. Google needed its streams of new data to be analyzed as soon as they were created.

Google also needed strong consistency for its queries, meaning a query should produce the same result from the same source each time, no matter which data center fields the query.

Consistency is typically considered a strength of relational database systems, though relational databases can have a hard time ingesting petabytes of data. It’s especially hard if the database is replicated across multiple servers in a cluster, which enterprises do to boost responsiveness and uptime. NoSQL databases, such as Cassandra, can easily ingest that much data, but Google needed a greater level of consistency than these technologies can typically offer.
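The article does not spell out Mesa’s mechanism, but one common way to get repeatable answers from replicated data is to apply updates in numbered batches and answer queries as of a version that every replica has already committed. The sketch below illustrates that general idea only; it is not Mesa’s actual design.

```typescript
// Each replica applies numbered update batches; a query is answered "as of"
// a version that every replica has already committed, so the answer is the
// same no matter which data center fields it.
class Replica {
  private totals = new Map<string, number>();
  private history: Array<Map<string, number>> = [new Map()]; // version 0 = empty

  applyBatch(updates: Array<[string, number]>): number {
    for (const [key, delta] of updates) {
      this.totals.set(key, (this.totals.get(key) ?? 0) + delta);
    }
    this.history.push(new Map(this.totals)); // snapshot for this version
    return this.history.length - 1;          // version number just committed
  }

  queryAsOf(key: string, version: number): number {
    return this.history[version].get(key) ?? 0;
  }
}

// Queries are routed with the minimum version committed across all replicas,
// so two data centers at different points in the backlog still agree.
```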

The Google researchers said that no commercial or existing open-source software was able to meet all of its requirements, so they created Mesa.

Mesa relies on a number of other technologies developed by the company, including the Colossus distributed file system, the BigTable distributed data storage system and the MapReduce data analysis framework. To help with consistency, Google engineers used an implementation of Paxos, a distributed synchronization protocol.

In addition to scalability and consistency, Mesa offers another advantage in that it can be run on generic servers, which eliminates the need for specialized, expensive hardware. As a result, Mesa can be run as a cloud service and easily scaled up or down to meet the job requirements.

Read the full article at/reproduced from ComputerWorld

Black Hat 2014: A New Smartcard Hack - IEEE Spectrum

Photo: Getty Images

According to new research, chip-based “Smartcard” credit and debit cards—the next-generation replacement for magnetic stripe cards—are vulnerable to unanticipated hacks and financial fraud. Stricter security measures are needed, the researchers say, as well as increased awareness of changing terms-of-service that could make consumers bear more of the financial brunt for their hacked cards.

By Mark Anderson. Read the original, full article – reproduced from IEEE Spectrum

The work is being presented at this week’s Black Hat 2014 digital security conference in Las Vegas. Ross Anderson, professor of security engineering at Cambridge University, and co-authors have been studying the so-called Europay-Mastercard-Visa (EMV) security protocols behind emerging Smartcard systems.

Though the chip-based EMV technology is only now being rolled out in North America, India, and elsewhere, it has been in use since 2003 in the UK and in more recent years across continental Europe as well. The history of EMV hacks and financial fraud in Europe, Anderson says, paints not nearly as rosy a picture of the technology as its promoters may claim.

“The idea behind EMV is simple enough: The card is authenticated by a chip that is much more difficult to forge than the magnetic strip,” Anderson and co-author Steven Murdoch wrote in June in the Communications of the ACM [PDF]. “The card-holder may be identified by a signature as before, or by a PIN… The U.S. scheme is a mixture, with some banks issuing chip-and-PIN cards and others going down the signature route. We may therefore be about to see a large natural experiment as to whether it is better to authenticate transactions with a signature or a PIN. The key question will be, ‘Better for whom?’”

Neither is ideal, Anderson says. But signature-based authentication does put a shared burden of security on both bank and consumer and thus may be a fairer standard for consumers to urge their banks to adopt.

“Any forged signature will likely be shown to be a forgery by later expert examination,” Anderson wrote in his ACM article. “In contrast, if the correct PIN was entered the fraud victim is left in the impossible position of having to prove that he did not negligently disclose it.”

And PIN authentication schemes, Anderson says, have a number of already discovered vulnerabilities, a few of which can be scaled up by professional crooks into substantial digital heists.

Continue reading the full article from IEEE Spectrum

Mobile Gadgets That Connect to Wi-Fi without a Battery - MIT Review

Air power: This antenna harvests signals from TV, radio, and cellular transmissions so that small Wi-Fi devices can get by without batteries.

A new breed of mobile wireless device lacks a battery or other energy storage, but it can still send data over Wi-Fi. These prototype gadgets, developed by researchers at the University of Washington, get all the power they need by making use of the Wi-Fi, TV, radio, and cellular signals that are already in the air.

The technology could free engineers to extend the tendrils of the Internet and computers into corners of the world they don’t currently reach. Battery-free devices that can communicate could make it much cheaper and easier to widely deploy sensors inside homes to take control of heating and other services.

Read the full article from MIT Review - Simple devices that can link up via Wi-Fi but don’t need batteries could make it easier to spread computing throughout your home. By Tom Simonite on August 1, 2014

Smart thermostats on the market today, such as the Nest, are limited by the fact that they can sense temperature only in their immediate location. Putting low-cost, Wi-Fi-capable, and battery-free sensors behind couches and cabinets could provide the detailed data needed to make such thermostats more effective. “You could throw these things wherever you want and never have to think about them again,” says Shyam Gollakota, an assistant professor at the University of Washington who worked on the project.

The battery-free Wi-Fi devices are an upgrade to a design the same group demonstrated last year—those devices could only talk to other devices like themselves (see “Devices Connect with Borrowed TV Signals and Need No Power Source”). Versions were built that could power LEDs, motion detectors, accelerometers, and touch-sensitive buttons.

Adding Wi-Fi capabilities makes the devices more practical. Gollakota hopes to establish a company to commercialize the technology, which should also be applicable to other wireless protocols, such as Zigbee or Bluetooth, that are used in compact devices without access to wired power sources, he says. A paper on the new devices will be presented at the ACM Sigcomm conference in Chicago in August.

Engineers have worked for decades on ways to generate power by harvesting radio signals from the air, a ubiquitous resource thanks to radio, TV, and cellular network transmitters. But although enough energy can be collected that way to run low-powered circuits, the power required to actively transmit data is significantly higher. Harvesting ambient radio waves can collect on the order of tens of microwatts of power. But sending data over Wi-Fi requires at least tens of thousands of times more power—hundreds of milliwatts at best and typically around one watt of power, says Gollakota.

The Washington researchers got around that challenge by finding a way to have the devices communicate without having to actively transmit. Their devices send messages by scattering signals from other sources—they recycle existing radio waves instead of expending energy to generate their own.

To send data to a smartphone, for example, one of the new prototypes switches its antenna back and forth between modes that absorb and reflect the signal from a nearby Wi-Fi router. Software installed on the phone allows it to read that signal by observing the changing strength of the signal it detects from that same router as the battery-free device soaks some of it up.
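A toy simulation shows the principle: the battery-free device signals bits by absorbing or reflecting the router’s transmission, and the phone recovers them from small dips in received signal strength. The dB levels and noise magnitude here are invented for illustration.

```typescript
// The tag encodes a 1 by absorbing (small RSSI dip) and a 0 by reflecting.
const BASELINE_RSSI = -50; // dBm seen by the phone when the tag reflects (assumed)
const ABSORB_DIP = 2;      // dB drop when the tag absorbs (assumed)

function tagTransmit(bits: number[]): number[] {
  // One RSSI sample per bit, plus a little measurement noise.
  return bits.map(b => BASELINE_RSSI - (b ? ABSORB_DIP : 0) + (Math.random() - 0.5));
}

function phoneDecode(samples: number[]): number[] {
  const threshold = BASELINE_RSSI - ABSORB_DIP / 2; // halfway between the two levels
  return samples.map(s => (s < threshold ? 1 : 0));
}

const message = [1, 0, 1, 1, 0, 0, 1];
console.log(phoneDecode(tagTransmit(message))); // recovers the original bits
```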

Continue reading the rest of the article from MIT Review

Google Explains How It Forgets - IEEE Spectrum

Photo: Michael Gottschalk/Getty Images

Google can forget, but unlike the rest of us, the process is not automatic. Yesterday Google told a European government data protection working party how it handles requests for search result link removals. The removals began in June after a May European court ruling (see our coverage) upholding a Spanish man’s right to be forgotten. The working group had earlier sent Google a questionnaire on the practicalities of the removals and met with Google and two other unnamed U.S. search engines. Google’s reply revealed that it is handling the requests on a case-by-case basis, with decisions resting on recently-hired staff. Companies that help individuals request link removals have begun receiving rejections, The New York Times reported.

By Lucas Laursen - Posted 4 Aug 2014 | 13:00 GMT

Read the full article / reproduced from IEEE Spectrum

The cover letter said that the company’s “approach will not be static” and that it expects to be in dialogue with data protection authorities. It spelled out the criteria by which staff decide whether to honor an individual’s link removal request. These hew close to those set out in the May court ruling.

The company noted some complications it has encountered, such as the fact that different EU countries have different policies on publishing full names in court documents. As anyone who has used Google News has discovered, the company also finds it difficult to establish what sort of online media count as “reputable” news organizations.

Google also laid out its policy of alerting users that name-containing searches may have had results modified by legal action:

“The notification is intended to alert users to the possibility that their results for this kind of query may have been affected by a removal, but not to publicly reveal which queries were actually affected.”
Perhaps the most illuminating passage was the admission that there is not yet a good way to convert the court’s order into a computer algorithm for filtering the public interest from the private:

“We are not automating decisions about these removals. We have to weigh each request individually on its merits, and that is done by people. We have many people working full time on the process, and ensuring enough resources are available for the processing of requests required a significant hiring effort.”

Some numbers help put that effort in perspective: the company received about 91,000 requests in the first 7 weeks the relevant form was available and is still working on the backlog. So far, it has approved 53 percent of requests, rejected 32 percent with an explanation of the reason why, and requested further information in 15 percent of the cases. It has also reversed some of its decisions already, in some high-profile cases involving the newspaper The Guardian. A UK House of Lords subcommittee recently called the court’s criteria “vague, ambiguous and unhelpful,” the BBC reported.

Read the full article / reproduced from IEEE Spectrum

Historic EDSAC circuit diagrams rescued from pile for trash - PHYS.org

Maurice Wilkes and Bill Renwick in front of the complete EDSAC. Credit: Wikipedia/CC BY 2.0

(Phys.org) – A discovery of diagrams is helping to reconstruct a historically significant computer, named EDSAC. Nineteen detailed circuit diagrams sitting in a corridor pile destined for the scrap heap were recently discovered, rescued, and handed over to a project team trying to better understand the early history of computing, gain insights into how EDSAC was built, and get in tune with the EDSAC engineers’ thinking. The EDSAC Replica Project’s aim is to build a working replica of this Cambridge University-built computer, which ran its first program in 1949 and is widely acknowledged as the world’s first practical, general-purpose computer. EDSAC stands for Electronic Delay Storage Automatic Calculator.

Andrew Herbert is leading the reconstruction project. He said EDSAC is important as the first computer built for other people to use to solve real problems that previously had to be tackled with hand calculators or paper-and-pencil methods. EDSAC made a dramatic difference in the speed at which such calculations could be done. The pioneering computer, designed by Sir Maurice Wilkes, helped scientists analyze data. EDSAC contained some 3,000 valves arranged in 12 racks and consumed about 12 kW of power. While there had been special-purpose computers before EDSAC – for example, for code-breaking and artillery shell calculations – EDSAC became known as the world’s first general-purpose computer.

. . . .

Continue reading the full article / reproduced from PHYS.org

Can Winograd Schemas Replace Turing Test for Defining Human-Level AI? - IEEE Spectrum

Illustration: Getty Images

Earlier this year, a chatbot called Eugene Goostman “beat” a Turing Test for artificial intelligence as part of a contest organized by a U.K. university. Almost immediately, it became obvious that rather than proving that a piece of software had achieved human-level intelligence, all that this particular competition had shown was that a piece of software had gotten fairly adept at fooling humans into thinking that they were talking to another human, which is very different from a measure of the ability to “think.” (In fact, some observers didn’t think the bot was very clever at all.) 

By Evan Ackerman

Read the full article from IEEE Spectrum

Clearly, a better test is needed, and we may have one, in the form of a type of question called a Winograd schema that’s easy for a human to answer, but a serious challenge for a computer.

The problem with the Turing Test is that it’s not really a test of whether an artificial intelligence program is capable of thinking: it’s a test of whether an AI program can fool a human. And humans are really, really dumb. We fall for all kinds of tricks that a well-programmed AI can use to convince us that we’re talking to a real person who can think.

For example, the Eugene Goostman chatbot pretends to be a 13-year-old boy, because 13-year-old boys are often erratic idiots (I’ve been one), and that will excuse many circumstances in which the AI simply fails. So really, the chat bot is not intelligent at all—it’s just really good at making you overlook the times when it’s stupid, while emphasizing the periodic interactions when its algorithm knows how to answer the questions that you ask it.

Conceptually, the Turing Test is still valid, but we need a better practical process for testing artificial intelligence. A new AI contest, sponsored by Nuance Communications and CommonsenseReasoning.org, is offering a US $25,000 prize to an AI that can successfully answer what are called Winograd schemas, named after Terry Winograd, a professor of computer science at Stanford University.

Here’s an example of one:

The trophy doesn’t fit in the brown suitcase because it is too big. What is too big?

The trophy, obviously. But it’s not obvious. It’s obvious to us, because we know all about trophies and suitcases. We don’t even have to “think” about it; it’s almost intuitive. But for a computer program, it’s unclear what the “it” refers to. To be successful at answering a question like this, an artificial intelligence must have some background knowledge and the ability to reason.

Here’s another one:

Jim comforted Kevin because he was so upset. Who was upset?

These are the rules the Winograd schemas have to follow:

1. Two parties are mentioned in a sentence by noun phrases. They can be two males, two females, two inanimate objects or two groups of people or objects.

2. A pronoun or possessive adjective is used in the sentence in reference to one of the parties, but is also of the right sort for the second party. In the case of males, it is “he/him/his”; for females, it is “she/her/her”; for inanimate objects it is “it/it/its”; and for groups it is “they/them/their.”

3. The question involves determining the referent of the pronoun or possessive adjective. Answer 0 is always the first party mentioned in the sentence (but repeated from the sentence for clarity), and Answer 1 is the second party.

4. There is a word (called the special word) that appears in the sentence and possibly the question. When it is replaced by another word (called the alternate word), everything still makes perfect sense, but the answer changes.
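To make the rules concrete, here is one possible way to represent a schema and its special-word flip, using the trophy/suitcase example from above; the field names are my own, not the contest’s official format.

```typescript
interface WinogradSchema {
  sentence: string;          // contains the two parties and the ambiguous pronoun
  pronoun: string;
  answers: [string, string]; // answer 0 = first party mentioned, answer 1 = second
  specialWord: string;       // swapping this word flips the correct answer
  alternateWord: string;
  correctWithSpecial: 0 | 1;
  correctWithAlternate: 0 | 1;
}

const trophyExample: WinogradSchema = {
  sentence: "The trophy doesn't fit in the brown suitcase because it is too big.",
  pronoun: "it",
  answers: ["the trophy", "the suitcase"],
  specialWord: "big",
  alternateWord: "small",   // "...because it is too small" -> now "it" means the suitcase
  correctWithSpecial: 0,
  correctWithAlternate: 1,
};
```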

For more details (including some examples of ways in which certain Winograd schemas can include clues that an AI could exploit), this paper is easy to understand and well worth reading.

. . . .

Continue reading the full article at IEEE Spectrum

Lack of coding skills may lead to skills shortage in Europe - ComputerNews

The European Commission (EC) is urging people to learn coding this summer, warning that a lack of basic coding skills could result in Europe facing a shortage of up to 900,000 ICT professionals by 2020.

Coding is the literacy of today and key to enabling the digital revolution, according to European Commission vice president for Digital Agenda, Neelie Kroes, and commissioner for education, culture, multilingualism and youth, Androulla Vassiliou.

ARTUR MARCINIEC – FOTOLIA

Programming is everywhere and fundamental to the understanding of a hyper-connected world, the EC has said.

Read the full article from ComputerNews

According to the Commission, more than 90% of professional occupations require some ICT competence. But the number of graduates in computer science is not keeping pace with this demand for skills. As a result, many open vacancies for ICT practitioners cannot be filled, despite the high level of unemployment in Europe, warned Kroes and Vassiliou.

“If we do not appropriately address this issue at a European and national level, we may face a skills shortage of up to 900,000 ICT professionals by 2020. The share of women in choosing technical careers is also alarmingly low. Coding is a way to attract girls to choose tech careers,” they said.

Kroes and Vassiliou have sent a joint letter to EU Education Ministers urging them to encourage children to get involved in EU Code week which takes place across Europe in October (11-17 October 2014).

Basic coding skills will become crucial for many jobs in the near future as we move to a more cloud-based and connected devices world.

Coding is at the heart of technology. Each and every interaction between humans and computers is governed by code. Programming skills are fundamental for tasks ranging from creating web apps, enabling online shopping, and optimising GPS software through to sifting through LHC data for the Higgs Boson particle, simulating the formation of stars or simulating the neuronal pathways in the brain, according to the organisers of EU Code Week.

Continue reading the full article from ComputerNews

BlackForest Aggregates Threat Information to Warn of Possible Cyber Attacks

Georgia Tech Research Institute (GTRI) cyber-security specialists Christopher Smoak, Bryan Massey and Ryan Spanier (l-r) pose in facilities used to gather information on the activities of hackers and pending cyber attacks. GTRI has developed a new open source intelligence gathering system known as BlackForest. (Credit: Gary Meek)

Coordinating distributed denial-of-service attacks, displaying new malware code, offering advice about network break-ins and posting stolen information – these are just a few of the online activities of cyber-criminals. Fortunately, activities like these can provide cyber-security specialists with advance warning of pending attacks and information about what hackers and other bad actors are planning.

Gathering and understanding this cyber-intelligence is the work of BlackForest, a new open source intelligence gathering system developed by information security specialists at the Georgia Tech Research Institute (GTRI). By using such information to create a threat picture, BlackForest complements other GTRI systems designed to help corporations, government agencies and nonprofit organizations battle increasingly-sophisticated threats to their networks.

Read the original article / reproduced from Georgia Tech Research

“BlackForest is on the cutting edge of anticipating attacks that may be coming,” said Christopher Smoak, a research scientist in GTRI’s Emerging Threats and Countermeasures Division. “We gather and connect information collected from a variety of sources to draw conclusions on how people are interacting. This can drive development of a threat picture that may provide pre-attack information to organizations that may not even know they are being targeted.”

Need expertise with machine learning / cloud / internet-scale computing / MapReduce / Hadoop etc.? Contact me – I can help! This is a core expertise area.

The system collects information from the public Internet, including hacker forums and other sites where malware authors and others gather. Connecting the information and relating it to past activities can let organizations know they are being targeted and help them understand the nature of the threat, allowing them to prepare for specific types of attacks. Once attacks have taken place, BlackForest can help organizations identify the source and mechanism so they can beef up their security.

Organizing distributed denial-of-service (DDoS) attacks is a good example of how the system can be helpful, Smoak noted. DDoS attacks typically involve thousands of people who use the same computer tool to flood corporate websites with so much traffic that customers can’t get through. The attacks hurt business, harm the organization’s reputation, bring down servers – and can serve as a diversion for other types of nefarious activity.

Georgia Tech Research Institute (GTRI) cyber-security specialists Ryan Spanier, Christopher Smoak, and Bryan Massey (l-r) pose in facilities used to gather information on the activities of hackers and pending cyber attacks. GTRI has developed a new open source intelligence gathering system known as BlackForest. (Credit: Gary Meek)

But they have to be coordinated using social media and other means to enlist supporters. BlackForest can tap into that information to provide a warning that may allow an organization to, for example, ramp up its ability to handle large volumes of traffic.

“We want to provide something that is predictive for organizations,” said Ryan Spanier, head of GTRI’s Threat Intelligence Branch. “They will know that if they see certain things happening, they may need to take action to protect their networks.”

Malware authors often post new code to advertise its availability, seek feedback from other writers and mentor others. Analyzing that code can provide advance warning of malware innovations that will need to be addressed in the future.

“If we see a tool pop up written by a person who has been an important figure in the malware community, that lets us know to begin working to mitigate the new malware that may appear down the road,” Smoak said.

Organizations also need to track what’s being made available in certain forums and websites. When a company’s intellectual property starts showing up online, that may be the first sign that a network has been compromised. Large numbers of credit card numbers, or logins and passwords, can show that a website or computer system of a retail organization has been breached.

“You have to monitor what’s out in the wild that your company or organization owns,” said Spanier. “If you have something of value, you will be attacked. Not all attacks are successful, but nearly all companies have some computers that have been compromised in one way or another. You want to find out about these as soon as possible.”

. . . .

Continue reading the full article from  Georgia Tech Research

Media Relations Contacts: Lance Wallace (404-407-7280) (lance.wallace@gtri.gatech.edu) or John Toon (404-894-6986) (jtoon@gatech.edu).

Writer: John Toon

Tor Project makes efforts to debug dark web - BBC News

THINKSTOCK – Security researchers claimed to have found a way to reveal Tor users’ identities

The co-creator of a system designed to make internet users unidentifiable says he is tackling a “bug” that threatened to undermine the facility.

The Tor (the onion router) network was built to allow people to visit webpages without being tracked and to publish sites whose contents would not show up in search engines.

Earlier this month two researchers announced plans to reveal a way to de-anonymise users of this “dark web”.

They were later prevented from talking.

Alexander Volynkin and Michael McCord – two security experts from Carnegie Mellon University’s computer emergency response team (Cert) – had been scheduled to reveal their findings at the Black Hat conference in Las Vegas in August.

However, a notice published on the event’s website now states that the organisers had been contacted by the university’s lawyers to say the talk had been called off.

“Unfortunately, Mr Volynkin will not be able to speak at the conference since the materials that he would be speaking about have not yet [been] approved by Carnegie Mellon University/Software Engineering Institute for public release,” the message said.

Continue reading the full article from BBC.com

Built for speed: Designing exascale computers - Harvard School Of Engineering and Applied Sciences

“Imagine a heart surgeon operating to repair a blocked coronary artery.

Someday soon, the surgeon might run a detailed computer simulation of blood flowing through the patient’s arteries, showing how millions of red blood cells jostle and tumble through the small vessels. The simulation would identify the best repair strategy. With a fast enough computer, it could all be done in a few minutes, while the operation is under way.”

THE SUMMER 2014 ISSUE OF TOPICS EXAMINES PROGRESS IN SUPERCOMPUTING. July 22, 2014. By Brian Hayes

Continue reading the rest of the article from Harvard School Of Engineering and Applied Sciences

EU funds project to boost European cloud computing market - ComputerWeekly

EVERYTHINGPOSSIBLE – FOTOLIA

A European Union-funded project called Cloudcatalyst has been set up to assess the current cloud computing market in Europe, identify barriers to cloud adoption and provide tools to boost its growth in the region.

The project aims to instill confidence in European businesses, public entities, ICT providers and other cloud stakeholders eager to develop and use cloud services.

Read the original article /reproduced from ComputerWeekly

Need expertise with cloud / internet-scale computing / MapReduce / Hadoop etc.? Contact me – I can help! This is a core expertise area.

It will create “a strong and enthusiastic community of cloud adopters and supporters in Europe”, according to Cordis, the European Commission’s project funding arm.

According to the EC, cloud computing is a “revolution” but its providers are still struggling to captivate and build trust among businesses and everyday citizens. “Cloud-sceptics” are concerned over data security and legal exposure and a lack of information around cloud is hindering its adoption.

The Cloudcatalyst project will tackle this issue by providing useful tools to foster the adoption of cloud computing in Europe and to boost the European cloud market, according to Cordis, the European Commission’s primary public repository that gives information about EU-funded projects.

The project, which is funded by FP7 – the 7th Framework Programme for Research and Technological Development – will target all cloud players. These include software developers, members of the scientific community developing and deploying cloud computing services, incubators at the local, national and European levels, large industries, SMEs, startups and entrepreneurs.

Funded under FP7, which has a total budget of over €50bn, the project will primarily analyse practices across Europe and identify the conditions for successful adoption.

“We will cover all the main issues around cloud and give a clear overview on a number of topics, such as current cloud trends, critical success factors to overcome major technical barriers, data privacy and compliance requirements, and recommendations for quality of service and cloud SLA,” said Dalibor Baskovc, vice-president at EuroCloud Europe, one of the project partners.

We see cloud as an engine of change and a central ingredient for innovation in Europe

Francisco Medeiros, European Commission

The project will also create a series of tools to help stakeholders create value-added cloud products and services. These consist of the Cloud Accelerator Toolbox and the Go-to-the-Cloud service platform – a collection of management tools bundling together trend analysis, use cases and practical recommendations in the form of printable report templates and instructional videos.

“The tools we are developing will help companies adopt and deploy cloud solutions, whatever their different needs and requirements are,” said Baskovc.

The project will also carry out a number of market surveys to gather key information and produce an overview of the cloud adoption status, such as why companies should develop cloud services, the main internal problems in adopting a cloud product, the associated risks and how these issues can be addressed.

According to the European Commission, cloud computing has the potential to employ millions in Europe by 2020.

“We see cloud as an engine of change and a central ingredient for innovation in Europe,” Francisco Medeiros, deputy head of unit, software and services, cloud computing at the European Commission told the Datacentres Europe 2014 audience in May this year. “Cloud is one of the fastest-growing markets in Europe.”

In 2013, worldwide hardware products grew by 4.2% to €401bn, while software and services grew by 4.5% to €877bn, signifying the importance of software services, said Medeiros.

Need expertise with cloud / internet-scale computing / MapReduce / Hadoop etc.? Contact me – I can help! This is a core expertise area.

Read the original article /reproduced from ComputerWeekly

Debunking five big HTML5 myths

HTML 5 – http://www.w3.org/

The ongoing discussion about the “readiness” of HTML5 is based on a lot of false assumptions. These lead to myths about HTML5 that get uttered once and then continuously repeated – a lot of times without checking their validity at all.

Reproduced from/read the original at Telefonica

Guest post from Christian Heilmann, Principal Developer Evangelist at Mozilla for HTML5 and open web

HTML5 doesn’t perform?

The big thing everybody wants to talk about when it comes to the problems with HTML5 is… performance. The main problem here is that almost every single comparison misses the fact that you are comparing apples and pears (no pun intended).

Comparing an HTML5 application’s performance with a native App is like comparing a tailored suit with one bought in a shop. Of course the tailored suit will fit you like a glove and looks amazing, but if you ever want to sell it or hand it over to someone else you are out of luck. It just won’t be the same for the next person.

That is what native Apps are – they are built and optimized for one single environment and purpose and are fixed in their state – more on that later.

HTML5, on the other hand, is by its very definition a web technology that should run independent of environment, display or technology. It has to be as flexible as possible in order to be a success on the web.

In its very definition the web is for everybody, not just for a small group of lucky people who can afford a very expensive piece of hardware and are happy to get locked into a fixed environment governed by a single company.

Native applications need to be written for every single device and every new platform from scratch whereas an HTML5 App allows you to support mobiles, tablets and desktops with the same product. Instead of having fixed dimensions and functionality an HTML5 App can test what is supported and improve the experience for people on faster and newer devices whilst not locking out others that can not buy yet another phone.
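That “test what is supported” step is ordinary feature detection. A small sketch of the pattern (the geolocation check and the fallback are just examples):

```typescript
// Progressive enhancement: probe for a capability and only use it if present,
// instead of assuming one fixed device profile.
function showNearbyStores(): void {
  if ("geolocation" in navigator) {
    navigator.geolocation.getCurrentPosition(
      pos => console.log(`stores near ${pos.coords.latitude}, ${pos.coords.longitude}`),
      () => console.log("location denied – falling back to a postcode search box")
    );
  } else {
    console.log("no geolocation support – falling back to a postcode search box");
  }
}
```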

Native Apps, on the other hand, do in a lot of cases need an upgrade and force the end user to buy new hardware or they’ll not get the product at all. From a flexibility point of view, HTML5 Apps perform admirably whilst native applications make you dependent on your hardware and leave you stranded when there is an upgrade you can’t afford or don’t want to make. A great example of this is the current switch by Apple to its own maps on iOS. Many end users are unhappy and would prefer to keep using Google Maps but can not.

Seeing that HTML5 on the desktop is perfectly capable of excelling in performance – from scrolling, to analyzing and changing video on the fly, up to running full 3D games and high-speed racing games at a very high frame rate – we have to ask ourselves where the problem with its performance lies.

The answer is hardware access. HTML5 applications are treated by mobile hardware developed for iOS and Android as second class citizens and don’t get access to the parts that allow for peak performance. A web view in iOS is hindered by the operating system to perform as fast as a native App although it uses the same principles. On Android both Chrome and Firefox show how fast browsers can perform whereas the stock browser crawls along in comparison.

The stock browser on Android reminds us of the Internet Explorer of the 90s which threatened to be set in stone for a long time and hinder the world wide web from evolving – the very reason Mozilla and Firefox came into existence.

In essence HTML5 is a Formula 1 car that has to drive on a dirt road whilst dragging a lot of extra payload given to it by the operating system without a chance to work around that – for now.

HTML5 cannot be monetized?

HTML5 is a technology stack based on open web technologies. Saying that HTML5 has no monetization model is like saying the web can not be monetized (which is especially ironic when this is written on news sites that show ads).

Whilst at first glance a closed App market is a simple way to sell your products, there is a lot of hype about their success and in reality not many developers manage to make a living with a single app on closed App markets. As discovery and findability get increasingly harder in App markets, a lot of developers don’t build one App but hundreds of the same App (talking dog, talking cat, talking donkey…) as it is all about being found quickly and being on the first page of search results in the market.

This is where closed App markets with native Apps are a real disadvantage for developers: Apps don’t have an address on the web (URL) and can not be found outside the market. You need to manually submit each of the Apps in each of the markets, abide by their review and submission process, and can not update your App easily without suffering outages in your offering.

An HTML5 App is on the web and has a URL, it can also get packaged up with products like Adobe PhoneGap to become a native application for iOS or Android. The other way around is not possible.

In the long term that begs the question what is the better strategy for developers: betting on one closed environment that can pull your product any time it wants or distributing over a world-wide, open distribution network and cover the closed shops as well?

Many apps in the Android and iOS store are actually HTML5 and got converted using PhoneGap. The biggest story about this was the Financial Times releasing their app as HTML5 and making a better profit than with the native one. And more recently the New York Times announced it was following suit with its Web app.

HTML5 cannot be used offline?

As HTML5 is a web technology stack, the knee-jerk reaction is to think that you would have to be online all the time to use it. This is plain wrong. There are many ways to store content offline in an HTML5 application. The simplest way is the Web Storage API, which is supported across all modern browsers (excluding Opera Mini, which is a special case as it sends content via a cloud service and has its own storage tools).

You can also store the application itself offline using AppCache which is supported by all but Internet Explorer. If you have more complex data to store than Web Storage provides you can use either IndexedDB (for Chrome and Firefox) or WebSQL (for iOS and Safari). To work around the issues there are libraries like Lawnchair available to make it easy for developers to use.
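The Web Storage part of this is only a few lines. A minimal sketch of caching fetched data for offline use (the cache key and the shape of the stored record are arbitrary choices here):

```typescript
// Save fetched data locally so the app still has something to show when offline.
interface CachedArticles {
  savedAt: string;
  items: string[];
}

function cacheArticles(items: string[]): void {
  const entry: CachedArticles = { savedAt: new Date().toISOString(), items };
  localStorage.setItem("articles", JSON.stringify(entry));
}

function loadArticles(): CachedArticles | null {
  const raw = localStorage.getItem("articles");
  return raw ? (JSON.parse(raw) as CachedArticles) : null;
}
```

IndexedDB and WebSQL follow the same store-then-read pattern when the data is too large or too structured for simple key-value storage.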

HTML5 has no development environment?

One concern often mentioned is that HTML5 lacks tooling for developers. Strangely enough, you never hear that argument from developers but from people who want to buy software to make their developers more effective instead of letting them decide what makes them effective.

HTML5 development at its core is web development and there is a quite amazingly practical development environment for that available. Again, the main issue is a misunderstanding of the web.

You do not build a product that looks and performs the same everywhere – this would rob the web of its core strengths. You build a product that works for everybody and excels on a target platform. Therefore your development environment is a set of tools, not a single one doing everything for you. Depending on what you build you choose to use many of them or just one.

The very success of the web as a medium is based on the fact that you do not need to be a developer to put content out – you can use a blogging platform, a CMS or even a simple text editor that comes with your operating system to start your first HTML page. As you progress in your career as a developer you find more and more tools you like and get comfortable and effective with, but there is no one tool to rule them all.

Some developers prefer IDEs like Visual Studio or Eclipse. Others want a WYSIWYG-style editor like Dreamweaver, but most web developers will have a text editor of some sort. From Sublime Text and Notepad++ up to VIM or emacs on a Linux computer, all of these are tools that can be and are used by millions of developers daily to build web content.

When it comes to debugging and testing, web developers are lucky these days, as the piece of software our end users use to see what we build – the browser – is also the debugging and testing environment. Starting with Firefox having Firebug as an add-on to see changes live and change things on the fly, followed by Opera’s Dragonfly and Safari and Chrome’s Devtools, all browsers now also have a lot of functionality that is there especially for developers. Firefox’s new developer tools go even further and, instead of simply being a debugging environment, are a set of tools in themselves that developers can extend to their needs.

Remote debugging is another option we have now. This means that, as developers, we can change applications running on a phone from our development computers instead of having to write them, send them to the phone, install them, test them, find a mistake and repeat. This speeds up development time significantly.

For the more visual developers Adobe lately released their Edge suite which brings WYSIWYG style development to HTML5, including drag and drop from Photoshop. Adobe’s Edge Inspect and PhoneGap makes it easy to test on several devices at once and send HTML5 Apps as packaged native Apps to iOS and Android.

In terms of deployment and packaging, Google just released its Yeoman project, which makes it dead easy for web developers to package and deploy their web products as applications, with all the necessary steps to make them perform well.

All in all there is no fixed development environment for HTML5 as that would neuter the platform – this is the web, you can pick and choose what suits you most.

Things HTML5 can do that native Apps can not

In essence, many of the myths about HTML5 stem from comparisons between something explicitly built for the platform it was tested on and something that merely also runs on that platform – like comparing the performance of a speedboat and a hovercraft, the outcome is predictable. The more interesting question is what HTML5 offers developers and end users that native applications cannot or do not:

  • Write once, deploy anywhere – HTML5 can run in browsers, on tablets and desktops and you can convert it to native code to support iOS and Android. This is not possible the other way around.
  • Share over the web – as HTML5 apps have a URL, they can be shared over the web and found when you search the web. You don’t need to rely on a crowded, limited marketplace; the same techniques used to promote any other web content apply. The more people like and link to your app, the easier it will be to find.
  • Built on agreed, multi-vendor standards – HTML5 is a group effort of the companies that make the web what it is now, not a single vendor that can head in a direction you are not happy with.
  • Millions of developers – everybody who has built something for the web in recent years is ready to write apps. It is no longer a small, specialized community.
  • Consumption and development tools are the same thing – all you need to get started is a text editor and a browser.
  • Small, atomic updates – if a native app needs an upgrade, the whole App needs to get downloaded again (new level of Angry Birds? Here are 23MB over your 3G connection). HTML5 apps can download data as needed and store it offline, thus making updates much less painful.
  • Simple functionality upgrades – native apps need to ask you for access to hardware when you install them and cannot change this later on, which is why every app asks for access to everything upfront (which of course is a privacy/security risk). An HTML5 app can ask for access to hardware and data on demand, without needing an update or re-installation.
  • Adaptation to the environment – an HTML5 app can use responsive design to give the best experience for the environment without having to change the code (a minimal sketch follows below). You can switch from desktop to mobile to tablet seamlessly without having to install a different App on each.

Let’s see native Apps do that.
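
To make the adaptation point above concrete, here is a minimal sketch of reacting to the environment at runtime via matchMedia; the breakpoint and class names are illustrative assumptions rather than recommendations:

```typescript
// Sketch of one code base adapting itself to its environment at runtime.
// The breakpoint and CSS class names are illustrative assumptions.

const phoneQuery = window.matchMedia("(max-width: 600px)");

function applyLayout(isPhone: boolean): void {
  document.body.classList.toggle("compact-layout", isPhone);
  document.body.classList.toggle("wide-layout", !isPhone);
}

// Apply once on load, then react whenever the viewport changes
// (rotating a tablet, resizing a desktop window) with no reinstall.
applyLayout(phoneQuery.matches);
phoneQuery.addEventListener("change", (e) => applyLayout(e.matches));
// Note: older browsers expose addListener() instead of addEventListener().
```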

Breaking the hardware lockout and making monetization easier

The main reason why HTML5 is not the obvious choice for developers right now is the above-mentioned lockout when it comes to hardware. An iOS device does not allow different browser engines, nor does it allow HTML5 to access the camera, the address book, vibration, the phone or text messaging – in other words, everything that makes a mobile device interesting for developers and that is very necessary functionality for Apps.

To work around this issue, Mozilla and a few others have created a set of APIs, called Web APIs, that define access to this hardware in a standardized way. This allows every browser out there to access the hardware securely and breaks the lockout.
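
As a small illustration of what on-demand hardware access looks like from web code, the sketch below checks for and uses two of the standardized APIs (Vibration and Geolocation); exactly which Web APIs a given browser and device expose varies, so treat this as a hedged sketch rather than a guaranteed feature list:

```typescript
// Sketch of on-demand hardware access from web code, using two standardized APIs
// (Vibration and Geolocation) with feature checks. Which Web APIs a given
// browser/device exposes varies, so availability is an assumption here.

function buzz(): void {
  if ("vibrate" in navigator) {
    navigator.vibrate(200); // vibrate for 200 ms; no install-time permission list required
  }
}

function whereAmI(): void {
  // The browser prompts the user at the moment of use, not at install time.
  navigator.geolocation.getCurrentPosition(
    (pos) => console.log(pos.coords.latitude, pos.coords.longitude),
    (err) => console.warn("location denied or unavailable", err.message)
  );
}

buzz();
whereAmI();
```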

The first environment to implement these is Firefox OS, with devices shipping next year. Using a Firefox OS phone you can build applications that have the same access to hardware that native applications have. Developers get direct access to the hardware and can thus build much faster and – more importantly – much smaller Apps. For the end user, the benefit is that the devices will be much cheaper, and Firefox OS can run on very low-specification hardware that, for example, cannot be upgraded to the newest Android.

In terms of monetization Mozilla is working on their own marketplace for HTML5 Apps which will not only allow HTML5 Apps to be submitted but also to be discovered on the web with a simple search. To make it easier for end users to buy applications we partner with mobile providers to allow for billing to the mobile contract. This allows end users without a credit card to also buy Apps and join the mobile web revolution.

How far is HTML5?

All in all, HTML5 is advancing by leaps and bounds towards being a very interesting and reliable platform for app developers. The main barrier we still have to remove is hardware access, and with the WebAPI work and systems like PhoneGap giving us that access, this is much less of a blocker than we anticipated.

The benefits of HTML5 over native apps mentioned above should be reason enough for developers to get involved and start with HTML5 instead of spending their time building a different code base for each platform. If all you want to support is one specific platform you don’t need to go that way, but then it is also pointless to blame HTML5's issues for your decision.

HTML5 development is independent of platform and browser. If you don’t embrace that idea, you limit its potential. Historically, closed platforms have come and gone while the web is still going strong. It lets you reach millions of users worldwide and start developing without asking anyone for permission or installing a complex development environment. This was and is the main reason why people start working with the web. And nobody is locked out, so have a go.

 

Open Standards@EU


I am a strong believer in and supporter of the adoption of, and adherence to, Open Standards to the maximum extent possible (without ignoring specific context, influence, extent of applicability, etc.). The Digital Agenda for Europe identified “lock-in” as a problem. Building open ICT systems by making better use of standards in public procurement will help prevent the lock-in problem.

Action 23 committed to providing guidance on the link between ICT standardisation and public procurement to help public authorities use standards to promote efficiency and reduce lock-in.

In June 2013 the Commission issued a Communication, accompanied by a Staff Working Document, that contains a practical guide on how to make better use of standards in procurement, in particular in the public sector, and identifies some of the barriers.

Read more at OpenStandards@EU

The change to standards-based systems

Even though the short-term costs might seem a barrier to change, in the long run the change to a standards-based system will benefit the overall public procurement landscape. It should therefore be carried out on a long-term basis (5 to 10 years), replacing those systems that come up for a new procurement with standards-based alternatives.

This requires public authorities to list all their ICT systems and understand how they work together, within their own organisation and with their stakeholders’ systems. They should identify which of these systems cannot easily be swapped for alternatives (these are the systems causing lock-in). For all of these, they should consider standards-compliant alternatives.

In addition, the process should be replicated for every system that is part of the same network, improving the adoption of common standards.

Best practices

Fighting lock-in requires support from public authorities at all levels. Some countries are actively promoting the use of standards and have already gained a lot of practical experience. In order to learn from this experience, the Commission organises meetings with public authorities, the ICT supply industry, standards organisations and civil society from across Europe.

By sharing their experience on a regular basis, public organisations learn from each other, adapt to emerging best practices and tackle common problems together. This sharing of best practice will help ensure that the choices made in different Member States converge, reducing fragmentation and helping to realise a real digital single market.

Neural networks that function like the human visual cortex may help realize faster, more reliable pattern recognition - PHYS.org

Artificial neural networks that can more closely mimic the brain’s ability to recognize patterns potentially have broad applications in biometrics, data mining and image analysis. Credit: janulla/iStock/Thinkstock


Despite decades of research, scientists have yet to create an artificial neural network capable of rivaling the speed and accuracy of the human visual cortex. Now, Haizhou Li and Huajin Tang at the A*STAR Institute for Infocomm Research and co-workers in Singapore propose using a spiking neural network (SNN) to solve real-world pattern recognition problems. Artificial neural networks capable of such pattern recognition could have broad applications in biometrics, data mining and image analysis.

Read the full original article from / reproduced from PHYS.ORG

Humans are remarkably good at deciphering handwritten text and spotting familiar faces in a crowd. This ability stems from the visual cortex—a dedicated area at the rear of the brain that is used to recognize patterns, such as letters, numbers and facial features. This area contains a complex network of neurons that work in parallel to encode visual information, learn spatiotemporal patterns and classify objects based on prior knowledge or statistical information extracted from patterns.

Like the human visual cortex, SNNs encode visual information in the form of spikes by firing electrical pulses down their ‘neurons’. The researchers showed that an SNN employing suitable learning algorithms could recognize handwritten numbers from the Mixed National Institute of Standards and Technology (MNIST) database with a performance comparable to that of support vector machines—the current benchmark for such methods.

Their SNN has a feedforward architecture and consists of three types of neurons: encoding, learning and readout neurons. Although the learning neurons are fully capable of discriminating patterns in an unsupervised manner, the researchers sped things up by incorporating supervised learning algorithms into the computation so that the learning neurons could respond to changes faster.
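
As a rough illustration of what the encoding neurons do, the toy sketch below uses a generic time-to-first-spike scheme in which stronger inputs fire earlier; this is a common textbook encoding, not the specific method from the paper.

```typescript
// Toy illustration of temporal (time-to-first-spike) encoding: stronger inputs spike earlier.
// This is a generic textbook scheme to show what encoding neurons do, not the specific
// algorithm used by Yu, Tang, Tan & Li in the paper referenced below.

const ENCODING_WINDOW_MS = 100; // assumed length of the encoding time window

function encodePixelsToSpikeTimes(pixels: number[]): number[] {
  // pixels are grey values in [0, 1]; intensity 1 fires immediately,
  // intensity 0 fires at the very end of the window.
  return pixels.map((intensity) => (1 - intensity) * ENCODING_WINDOW_MS);
}

console.log(encodePixelsToSpikeTimes([0.0, 0.5, 1.0])); // [100, 50, 0]
```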

… Continue reading the full article from PHYS.ORG

More information: Yu, Q., Tang, H., Tan, K.C. & Li, H. Rapid feedforward computation by temporal encoding and learning with spiking neurons. IEEE Transactions on Neural Networks and Learning Systems 24, 1539–1552 (2013). dx.doi.org/10.1109/TNNLS.2013.2245677

 

Introducing Project Adam: a new deep-learning system - MSR

Members of the team that worked on the asynchronous DNN project: (from left) Karthik Kalyanaraman, Trishul Chilimbi, Johnson Apacible, Yutaka Suzue


Project Adam is a new deep-learning system modeled after the human brain that has greater image classification accuracy and is 50 times faster than other systems in the industry.

Project Adam, an initiative by Microsoft researchers and engineers, aims to demonstrate that large-scale, commodity distributed systems can train huge deep neural networks effectively. For proof, the researchers created the world’s best photograph classifier, using 14 million images from ImageNet, an image database divided into 22,000 categories.

Included in the vast array of categories are some that pertain to dogs. Project Adam knows dogs. It can identify dogs in images. It can identify kinds of dogs. It can even identify particular breeds, such as whether a corgi is a Pembroke or a Cardigan.

Now, if this all sounds vaguely familiar, that’s because it is—vaguely. A couple of years ago, The New York Times wrote a story about Google using a network of 16,000 computers to teach itself to identify images of cats. That is a difficult task for computers, and it was an impressive achievement.

Project Adam is 50 times faster—and more than twice as accurate, as outlined in a paper currently under academic review. In addition, it is efficient, using 30 times fewer machines, and scalable, areas in which the Google effort fell short.

Read the full article/reproduced from Microsoft Research

 

Oracle Big Data SQL lines up Database with Hadoop, NoSQL frameworks

Hadoop


Hadoop continues to be a looming influence in the world of big data, and that holds true with the unveiling of the next step in Oracle’s big data roadmap. Oracle’s latest big idea for big data aims to eliminate data silos with new software connecting the dots between the Oracle Database, Hadoop and NoSQL.


need expertise with cloud / internet scale computing / mapreduce / hadoop  etc. ? contact me - i can help! - this is a core expertise area.

Read the original / reproduced from ZDNet

The Redwood Shores, Calif.-headquartered corporation introduced Oracle Big Data SQL, SQL-based software that streamlines the flow of data between the Oracle Database and the NoSQL and Hadoop frameworks.

The approach is touted as minimizing data movement, which could translate to faster performance when crunching numbers while also reducing the security risks of data in transit.

Big Data SQL promises to be able to query any and all kinds of structured and unstructured data. Oracle Database’s security and encryption features can also be blanketed over Hadoop and NoSQL data.

Beyond extending its enterprise governance credentials, Oracle connected plenty of dots within its portfolio as well. Big Data SQL runs on Oracle’s Big Data Appliance and is set up to play well with the tech titan’s flagship Exadata database machine. The Big Data SQL engine also borrows other familiar portfolio elements, such as Exadata’s Smart Scan technology for local data queries.

The Big Data Appliance itself was built on top of Oracle’s cloud distribution, which has been in the works for the last three years.

Neil Mendelson, vice president of big data and advanced analytics at Oracle, told ZDNet on Monday that enterprise customers are still facing the following three obstacles: managing integration and data silos, obtaining the right people with new skill sets or relying on existing in-house talent, and security.

“Over this period of time working with customers, they’re really hitting a number of challenges,” Mendelson posited. He observed that much of what customers are doing today is experimental in nature, but that they’re now ready to move on to the production stage.

Thus, Mendelson stressed, Big Data SQL is designed to give users the ability to issue a single query that can run against data in Hadoop and NoSQL — individually or in any combination.

“Oracle has taken some of its intellectual property and moved it on to the Hadoop cluster, from a database perspective,” Mendelson explained.

In order to utilize Big Data SQL, Oracle Database 12c is required first. Production is slated to start in August/September, and pricing will be announced when Big Data SQL goes into general availability.

Also on Tuesday, the hardware and software giant was expected to ship a slew of security updates fixing more than 100 vulnerabilities across hundreds of versions of its products.

That follows a blog post on Monday penned by Oracle’s vice president of Java product management, Henrik Stahl, who aimed to clarify the future of Java support on Windows XP.

He dismissed claims that Oracle would hamper Java updates from being applied to systems running the older version of Windows or that Java wouldn’t work on XP altogether anymore.

Nevertheless, Stahl reiterated Oracle’s previous stance that users still running Windows XP should upgrade to an operating system currently supported.

need expertise with cloud / internet scale computing / mapreduce / hadoop  etc. ? contact me - i can help! - this is a core expertise area.

Read the original / reproduced from ZDNet

First major redesign of Raspberry Pi unveiled - TheEngineer

A new version of the credit card-sized computer, the Raspberry Pi, is launched today adding extra sensors and connectors to the £20 device.


The new model, known as the B+, represents the first major redesign of the Raspberry Pi since its commercial launch and features four USB ports, enabling the computer to support extra devices without their own mains power connection.

The computer is designed and manufactured in the UK as a way of promoting computer science to young people.

But it has also been widely embraced by the wider amateur and professional engineering communities and used for projects from home-made drones to creating industrial PCs that can control hundreds of devices.

Read the original /reproduced from theEngineer.co.uk

The Raspberry Pi Foundation, the non-profit group that produces the device, hopes the extra connections and sensors will enable users to create bigger projects.

Eben Upton, CEO of Raspberry Pi Trading, said in a statement: ‘We’ve been blown away by the projects that have been made possible through the original B boards and, with its new features, the B+ has massive potential to push the boundaries and drive further innovation.’

Source: Raspberry Pi/Element 14


The Raspberry Pi B+ is based on the same Broadcom BCM2835 Chipset and 512MB of RAM as the previous model.

It is powered by micro USB with AV connections through either HDMI or a new four-pole connector replacing the existing analogue audio and composite video ports.

The SD card slot has been replaced with a micro-SD, tidying up the board design and helping to protect the card from damage. The B+ board also now uses less power (600mA) than the Model B Board (750mA) when running.

It features a 40-pin extended GPIO, although the first 26 pins remain identical to the original Raspberry Pi Model B for 100% backward compatibility.

The Raspberry Pi Model B+ is available to buy today on the element14 Community.

Read the original /reproduced from theEngineer.co.uk



© (all) respective content owner(s)