9 Must-Have Skills You Need to Become a Data Scientist

Published in From the WWW on November 23rd, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Over the past year, interest in data science has soared. Nate Silver is a household name, companies everywhere are searching for unicorns, and professionals in many different disciplines have begun eyeing the well-salaried profession as a possible career move.

In our recruiting searches here at Burtch Works, we’ve spoken to many analytics professionals who are considering adapting their skills to the growing field of data science, and have questions about how to do so. From my perspective as a recruiter, I wanted to put together a list of technical and non-technical skills that are critical to success in data science, and at the top of hiring managers’ lists. Reproduced from/read the original from Burtch Works

Need expertise with Cloud / Internet Scale computing / Data Science / Analytics / Hadoop / Big Data / Machine Learning / Algorithms / Architectures etc.? Contact me – I can help – these are primary expertise areas.

Every company will value skills and tools a bit differently, and this is by no means an exhaustive list, but if you have experience in these areas you will be making a strong case for yourself as a data science candidate.

Technical Skills: Analytics

  1. Education – Data scientists are highly educated – 88% have at least a Master’s degree and 46% have PhDs – and while there are notable exceptions, a very strong educational background is usually required to develop the depth of knowledge necessary to be a data scientist. Their most common fields of study are Mathematics and Statistics (32%), followed by Computer Science (19%) and Engineering (16%).
  2. SAS and/or R – In-depth knowledge of at least one of these analytical tools is expected; for data science, R is generally preferred.

Technical Skills: Computer Science

  3. Python Coding – Python is the most common coding language I typically see required in data science roles, along with Java, Perl, or C/C++.
  4. Hadoop Platform – Although this isn’t always a requirement, it is heavily preferred in many cases. Having experience with Hive or Pig is also a strong selling point. Familiarity with cloud tools such as Amazon S3 can also be beneficial.
  5. SQL Database/Coding – Even though NoSQL and Hadoop have become a large component of data science, it is still expected that a candidate will be able to write and execute complex queries in SQL (a short sketch follows this list).
  6. Unstructured data – It is critical that a data scientist be able to work with unstructured data, whether it is from social media, video feeds or audio.
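
As a concrete, purely illustrative companion to the SQL point above, here is a minimal sketch of the kind of aggregate query a candidate might be asked to write, executed from Python with the standard-library sqlite3 module. The orders table, its columns, and the numbers are invented for the example.

```python
# Minimal sketch: running an analytical SQL query from Python using the
# standard-library sqlite3 module. The "orders" table and its columns are
# hypothetical, purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES (1, 'EU', 120.0), (1, 'EU', 80.0),
                              (2, 'US', 200.0), (3, 'EU', 40.0);
""")

# Example of a "complex" query: aggregate revenue per region, keep only
# regions above a threshold, and rank them.
query = """
    SELECT region,
           COUNT(DISTINCT customer_id) AS customers,
           SUM(amount)                 AS revenue
    FROM orders
    GROUP BY region
    HAVING SUM(amount) > 100
    ORDER BY revenue DESC;
"""
for row in conn.execute(query):
    print(row)   # e.g. ('EU', 2, 240.0)

conn.close()
```

The same GROUP BY / HAVING / ORDER BY pattern carries over directly to production databases and to SQL-on-Hadoop tools such as Hive.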

Non-Technical Skills

  7. Intellectual curiosity – No doubt you’ve seen this phrase everywhere lately, especially as it relates to data scientists. Frank Lo describes what it means, and talks about other necessary “soft skills” in his guest blog posted a few months ago.
  8. Business acumen – To be a data scientist you’ll need a solid understanding of the industry you’re working in, and know what business problems your company is trying to solve. In terms of data science, being able to discern which problems are important to solve for the business is critical, in addition to identifying new ways the business should be leveraging its data.
  9. Communication skills – Companies searching for a strong data scientist are looking for someone who can clearly and fluently translate their technical findings to a non-technical team, such as the Marketing or Sales departments. A data scientist must enable the business to make decisions by arming them with quantified insights, in addition to understanding the needs of their non-technical colleagues in order to wrangle the data appropriately. Check out our recent flash survey for more information on communication skills for quantitative professionals.

The next question I always get is, “What can I do to develop these skills?” There are many resources around the web, but I don’t want to give anyone the mistaken impression that the path to data science is as simple as taking a few MOOCs. Unless you already have a strong quantitative background, the road to becoming a data scientist will be challenging – but not impossible.

However, if it’s something you’re sincerely interested in, and have a passion for data and lifelong learning, don’t let your background discourage you from pursuing data science as a career. Here are a few of the resources we’ve found to be helpful:

 

Resources

  1. Advanced Degree – More Data Science programs are popping up to serve the current demand, but there are also many Mathematics, Statistics, and Computer Science programs.
  2. MOOCs – Coursera, Udacity, and Codecademy are good places to start.
  3. Certifications – KDnuggets has compiled an extensive list.
  4. Bootcamps – For more information about how this approach compares to degree programs or MOOCs, check out this guest blog from the data scientists at Datascope Analytics.
  5. Kaggle – Kaggle hosts data science competitions where you can practice, hone your skills with messy, real world data, and tackle actual business problems. Employers take Kaggle rankings seriously, as they can be seen as relevant, hands-on project work.
  6. LinkedIn Groups – Join relevant groups to interact with other members of the data science community.
  7. Data Science Central and KDnuggets – Both are good resources for staying at the forefront of industry trends in data science.
  8. The Burtch Works Study: Salaries of Data Scientists – If you’re looking for more information about the salaries and demographics of current data scientists be sure to download our data scientist salary study.

Reproduced from/read the original from Burtch Works

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

11 Innovators Receive HPC Excellence Awards – Scientific Computing

Published in From the WWW on November 23rd, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

International Data Corporation (IDC) has announced the eighth round of recipients of the HPC Innovation Excellence Award at the SC’14 high performance computing (HPC) conference in New Orleans, Louisiana. Two sets of winners are announced each year, at the November SC conference in the U.S. and the June ISC HPC conference in Germany.

Read the original post/reproduced from Scientific Computing

The HPC Innovation Excellence Awards recognize noteworthy achievements by users of high performance computing technologies. The program’s goals are to showcase return on investment (ROI) and scientific success stories involving HPC; to help other users better understand the benefits of adopting HPC and justify HPC investments, especially for small and medium-size businesses (SMBs); to demonstrate the value of HPC to funding bodies and politicians; and to expand public support for increased HPC investments.

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Machine Learning / Algorithms / Architectures etc.? Contact me – I can help – these are primary expertise areas.

“IDC research has confirmed that HPC can greatly accelerate innovation and in many cases can generate ROI. The award program aims to collect a large set of success stories across many research disciplines, industries, and application areas,” said Earl Joseph, Program Vice President for HPC at IDC. “The winners achieved clear success in applying HPC to improve business ROI, scientific advancement, and/or engineering successes. Many of the achievements will also directly benefit society.”

Winners of the prior seven rounds of awards included 41 organizations from the U.S., seven from the UK, four from Italy, three from the People’s Republic of China, two each from India and Slovenia, and one each from Australia, Canada, Sweden, South Korea, Switzerland, Germany, France, and Spain.

The new award winners and project leaders announced at SC’14 are as follows (contact IDC for additional details about the projects):

Argonne National Laboratory, NRG (Netherlands), SCK-CEN (Belgium), TerraPower, and the University of Illinois at Urbana-Champaign: Researchers from Argonne National Laboratory and the University of Illinois at Urbana-Champaign teamed with nuclear reactor designers and research laboratories in the United States and Europe to enable high-fidelity, cost-saving simulations to design the next generation of nuclear reactors using the computational fluid dynamics code Nek5000. This research will result in multimillion-dollar savings for several companies and nuclear research centers. Project Leads: Paul Fischer, Elia Merzari, Aleks Obabko, and Shashi Aithal, Argonne National Laboratory.

The Center for Pediatric Genomic Medicine at Children’s Mercy Hospitals Kansas City was the first genome center in the world to be created inside a children’s hospital and one of the first to focus on genome sequencing and analysis for inherited childhood diseases. While most genome centers focus on research, the CPGM develops new clinical tests as a starting point for next‐generation medical treatments to improve outcomes in patients at Children’s Mercy and around the world. Using the TaGSCAN and STAT-seq applications, Children’s Mercy has reduced the overall diagnosis time and substantially helped affected children and their families. Project Lead: Dr. Stephen Kingsmore.

GIS Federal: For the US Army and the intelligence community as a whole, GIS Federal developed an innovative approach to quickly filter, analyze, and visualize in near real time big data streams from hundreds of data providers with a particular emphasis on geospatial data. GIS Federal leveraged the highly parallel compute power of graphical processing units (GPUs) to conduct the data processing. The solution generated multimillion-dollar revenues while saving tens of millions of dollars. Project Leads: Amit Vij and Nima Negahban.

North Carolina State University: Researchers from NCSU conducted innovative research that will allow better prediction of thermal hydraulic behavior for current and future nuclear reactor designs. They analyzed the turbulence anisotropy in single-phase and two-phase bubbly channel flows based on DNS data. These novel simulations will help academia and later industry. Multiphase flow model development for computational fluid dynamics already benefits from high fidelity simulations presented in this work. Project Lead: Igor A. Bolotnov (Department of Nuclear Engineering).

Nexio Simulation is a French SME located in Toulouse and specializing in electromagnetic simulation and studies for applications in the marine, space, defense and aeronautics domains. Nexio Simulation partnered with Bpifrance (the French public bank dedicated to SMEs), Inria, and GENCI to optimize and scale out its current simulation package, CAPITOLE-EM, by using large-scale HPC resources. While the maximum simulation size was about 500,000 unknowns in 2011, Nexio was able to simulate 6 million unknowns thanks to the use of HPC. This major improvement allowed Nexio to win two major contracts with Japanese aerospace companies and to participate in the first PRACE SHAPE call and the Fortissimo project. Project Lead: Pascal De Resseguier.

NASA: The noise generated by civil air transport adversely impacts population centers near major airports. With the expected growth in air travel, community exposure to aircraft noise will increase considerably. To alleviate this problem, the Environmentally Responsible Aviation (ERA) project within NASA’s Aeronautics Research Mission Directorate is working to simultaneously reduce aircraft noise, fuel consumption, and engine emissions. High-fidelity simulations are being used to provide an accurate representation of the aerodynamic mechanisms that produce airframe noise (a prominent component of noise during aircraft landing) and to evaluate a suite of novel noise reduction concepts for aircraft flaps and landing gear. Project Lead: Mehdi R. Khorrami.

Central Michigan University researchers used HPC resources to run and visualize a breakthrough simulation involving a long-track EF5 tornado embedded within a supercell. Code was developed to utilize buffered HDF5 output in the CM1 model in order to achieve satisfactory throughput when doing I/O. A plugin was developed to interface VisIt to the CM1 output format. This research adds innovative improvements to the existing simulation workflow, potentially enabling operational use of CM1 models. Project Lead: Leigh Orf.

PayPal engineers developed a platform for real-time event analytics using HPC designs on new hardware technology. By converting traditional text data into digital signals through a process of encoding and mapping, the engineers used multi-core digital signal processors in the HP/Texas Instruments Moonshot m800 platform to deliver high performance/low-latency processing with very low power (approx. 11.2GF/watt). A truly revolutionary approach, PayPal’s method brings the rich legacy of digital signal processing’s capabilities to real-time analytics for the first time. Project Leads: Ryan Quick and Arno Kolster.

The University of Texas MD Anderson Cancer Center, Texas Advanced Computing Center (TACC) and Elekta AB: Researchers at MD Anderson Cancer Center in collaboration with TACC and Elekta AB are using detailed Monte Carlo computer simulations of radiation transport to assist in the development of the next generation of radiation therapy cancer treatments, which use a magnetic resonance imaging (MRI) scanner integrated with a radiation therapy unit (MRI-linac unit). The results of the simulations have demonstrated that the response of radiation detectors in the presence of magnetic fields can be predicted and accounted for, enabling researchers to calibrate the new MRI-linac units. This research is expected to lead to the development of new methods and procedures for the use of radiation detectors in the presence of magnetic fields. This will make possible the implementation and safe use of new MRI-linac units. Ultimately, the results of this project will directly contribute to improved treatment outcomes for cancer patients. Project Lead: Gabriel O. Sawakuchi.

Researchers at the Ohio State University Comprehensive Cancer Center developed and implemented bioinformatics and molecular methods to understand what happens to human papillomavirus (HPV) DNA in the “end game” of HPV-positive human cancers. Approximately 15 percent of all cancers are caused by viruses, including HPV, but the mechanisms by which such viruses cause cancers have remained mostly unknown. Using the new Oakley supercomputer at the Ohio Supercomputer Center, the OSU researchers found that in virtually every HPV-positive cancer cell line that they studied, HPV had integrated into the host genome and was associated with focal genomic instability. They used this insight to develop a new model called the HPV looping model. This research could eventually have a significant impact on how cancer doctors detect and treat different types of virus-associated cancer. Project Leads: Drs. David E. Symer, Keiko Akagi, Jingfeng Li, and Maura L. Gillison, and the Ohio Supercomputer Center.

Tech-X Corporation: To heat magnetically confined plasmas to the millions of degrees needed for fusion reactions, scientists inject megawatts of electromagnetic energy from carefully engineered radiofrequency antennas. The generated electromagnetic waves interact with the plasma in complex ways. Scientists at Tech-X Corporation modeled these interactions using their VSim software at increasing levels of detail, scaling up to 184,000-core, billion-grid-cell simulations on the Titan Cray XK7 at the Oak Ridge Leadership Computing Facility (OLCF). They are now using these simulation results (in collaboration with other members of the SciDAC Center for Simulation of Wave-Plasma Interactions) to develop and refine predictive models for wave-plasma interactions in the reactor core and edge. Such computations enable the identification of more efficient operational regimes for existing magnetic fusion experiments, and provide predictive capabilities for future experimental devices. This work was funded by DOE grant DE-SC0009501, with OLCF computing resources provided by an ALCC award sponsored by DOE’s Office of Science, Contract No. DE-AC05-00OR22725. The VSim team provided the VSim computational application. Project Leads: Thomas G. Jenkins and David N. Smithe.

Read the original post/reproduced from Scientific Computing

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Georgia Tech Professor Proposes Alternative to ‘Turing Test’

Published in From the WWW on November 23rd, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Mark Riedl. Associate Professor, School of Interactive Computing

A Georgia Tech professor is offering an alternative to the celebrated “Turing Test” to determine whether a machine or computer program exhibits human-level intelligence. The Turing Test – originally called the Imitation Game – was proposed by computing pioneer Alan Turing in 1950. In practice, some applications of the test require a machine to engage in dialogue and convince a human judge that it is an actual person.

Read the full article from/Reproduced from Georgia Tech

Creating certain types of art also requires intelligence, observed Mark Riedl, an associate professor in the School of Interactive Computing at Georgia Tech, prompting him to consider whether that might lead to a better gauge of whether a machine can replicate human thought.

“It’s important to note that Turing never meant for his test to be the official benchmark as to whether a machine or computer program can actually think like a human,” Riedl said. “And yet it has, and it has proven to be a weak measure because it relies on deception. This proposal suggests that a better measure would be a test that asks an artificial agent to create an artifact requiring a wide range of human-level intelligent capabilities.”

To that end, Riedl has created the Lovelace 2.0 Test of Artificial Creativity and Intelligence.

For the test, the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence and the artifact meets certain creative constraints given by a human evaluator. Further, the human evaluator must determine that the object is a valid representative of the creative subset and that it meets the criteria. The created artifact needs only meet these criteria but does not need to have any aesthetic value. Finally, a human referee must determine that the combination of the subset and criteria is not an impossible standard.

The Lovelace 2.0 Test stems from the original Lovelace Test as proposed by Bringsjord, Bello and Ferrucci in 2001. The original test required that an artificial agent produce a creative item in such a way that the agent’s designer cannot explain how it developed the creative item. The item, thus, must be created in such a way that it is valuable, novel and surprising.

Riedl contends that the original Lovelace test does not establish clear or measurable parameters. Lovelace 2.0, however, enables the evaluator to work with defined constraints without making value judgments such as whether the artistic object created surprise.

Riedl’s paper, available here, will be presented at Beyond the Turing Test, an Association for the Advancement of Artificial Intelligence (AAAI) workshop to be held Jan. 25 to 29, 2015, in Austin, Texas.

Read the full article from/Reproduced from Georgia Tech

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Google Lifts the Turing Award Into Nobel Territory – NYT

Published in Snippet on November 16th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

The A.M. Turing Award is often called the Nobel Prize of computer science. Now, thanks to Google’s largesse, it will be a Nobel-level prize financially: $1 million.

The quadrupling of the prize money, announced on Thursday by the Association for Computing Machinery, the professional organization that administers the award, is intended to elevate the prominence and recognition of computer science. The move can be seen as another sign of the boom times in technology.

Read the original news / reproduced from NYT

Computing is increasingly an ingredient in every field, from biology to business. College students are rushing to take computer science courses, encouraged by their parents. It’s not just a skill but a mind-set. Computational thinking is the future, where the excitement and money is. Quants rule.

But the Turing Award celebrates the slower and deeper side of computing. It is given, said Alexander L. Wolf, president of A.C.M. and a professor of computer science at Imperial College London, to the “true pioneers” who are “fundamental contributors to the science and technology of computing.”

Previous recipients have not been household names, except in very geeky households. They did not make fortunes, but they created the underlying insights in mathematics, and in software and hardware design, that helped make personal computers, the Internet, online commerce, social networks and smartphones a reality.

Yet computer science is also a practical, problem-solving discipline. At the announcement event on Thursday morning in New York, Stuart Feldman, vice president of engineering at Google, said he had been impressed by the blend of the theoretical and practical sides of computing in reading over the Turing Award citations since 1966. The prize, he said, recognizes both “the finest of thought and the broadest of impact.”

Since 2007, the Turing Award had carried prize money of $250,000, jointly underwritten by Google and Intel. But Intel decided to step away as a funder, and Google stepped up and upped the ante. The million-dollar award essentially matches the Nobel Prize’s 8 million Swedish kronor, which is a bit more than $1 million at current exchange rates.

Albert Fert, winner of the Nobel Prize in Physics in 2007, left, with Joseph Sifakis, winner of the Turing Award that year. Since then, the Turing Award had carried prize money of $250,000, jointly underwritten by Google and Intel. Credit: Olivier Laban-Mattei/Pool Photo, via Reuters

Increasing the financial reward, Mr. Feldman said, lifts the Turing Award into the “major league of scientific prizes.”

For Google, being the deep-pocketed benefactor of the Turing Award is both good branding and a public statement that it takes fundamental research seriously.

“Computing is our lifeblood,” Mr. Feldman said. The company, he added, hopes that increasing the prize money will give greater public prominence and recognition to the importance of computer science.

Silvio Micali, one of the handful of Turing winners in attendance, spoke of the spread of computer science into other disciplines. By now, Mr. Micali said, computing is well-established in the sciences. But the importance of computing and its reach, he said, is destined to accelerate further across the economy and society.

Mr. Micali, a professor at the Massachusetts Institute of Technology who won the prize in 2013, urged young computer scientists to go deep and take on big challenges instead of focusing on “lesser and easier targets” with quicker payoffs.

The patient pursuit of computing research can seem out of step at a time when even many undergraduate computer science students drop out of school to join start-ups. When asked about that, Mr. Wolf said both the creation and the commercialization of technology are needed. “Some people invent the foundations on which others can build,” he said, “and others — some of them dropouts — are those that make these technologies massively available to people and society.”

Google itself reflects that combination. Its founders, Larry Page and Sergey Brin, were both Ph.D. candidates in computer science at Stanford University when they created the concepts behind Google search technology.

The legacy of Alan Turing is certainly getting a boost this month, and not just from more money for his namesake award. Next comes the release (on Friday in Britain, and later this month in the United States) of “The Imitation Game,” a movie version of Turing’s life, starring Benedict Cumberbatch as the English polymath and Keira Knightley as Joan Clarke, a friend and fellow code-breaker at Bletchley Park, where German World War II codes were successfully deciphered.

There is also the reissue in paperback of “Alan Turing: The Enigma,” a biography of the man known as the father of theoretical computer science and artificial intelligence. The book, written by Andrew Hodges, was originally published in 1983. The new version carries the additional subtitle, “The Book That Inspired the Film, The Imitation Game.”

Past Turing winners, though, will not be getting a felicitous bump. The enriched prizes will not be retroactive.

“I just asked,” Butler W. Lampson joked at the announcement event in New York. Mr. Lampson was a leader in the 1970s at Xerox PARC, where so much of the underlying technology of personal computing, adopted by Apple and Microsoft, was built.

For that work, Mr. Lampson won a Turing Award in 1992. Today, Mr. Lampson is a scientist in Microsoft’s research labs and an adjunct professor at M.I.T. Envious of the paychecks for future winners? “Oh, I’ll get by,” he replied, smiling.

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Ballmer says machine learning will be the next era of computer science – Computer World

Published in From the WWW on November 16th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Former Microsoft CEO Steve Ballmer. Credit: Reuters/2013 file photo

The former CEO of Microsoft said the next era of computer science is going to focus on machine learning.

Steve Ballmer, who headed Microsoft from 2000 to 2014, said when looking at what’s ahead in computer science research, he’s most excited about machine learning, the science of building algorithms so that computers can recognize behavior in large data sets.

“I think it’s the dawn of an exciting new era of info and computer science,” Ballmer told Computerworld. “It’s a new world in which the ability to understand the world and people and draw conclusions will be really quite remarkable… It’s a fundamentally different way of doing computer science.” – Read the original article/reproduced from ComputerWorld
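
As a toy illustration of what recognizing behavior in large data sets can mean in code (my own example, unrelated to anything Microsoft or Harvard is actually building), here is a minimal scikit-learn sketch that learns a pattern from labelled examples and applies it to new data:

```python
# Toy machine-learning example (illustrative only): learn a simple pattern
# -- "large purchases late at night tend to be flagged" -- from labelled
# examples, then apply it to new data.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.tree import DecisionTreeClassifier

# features: [purchase amount in $, hour of day]; labels: 1 = flagged
X = [[500, 2], [20, 14], [700, 1], [15, 10], [650, 3], [30, 16]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[600, 2], [25, 13]]))   # -> [1 0]
```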

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Machine Learning / Algorithms / Architectures etc.? Contact me – I can help – these are primary expertise areas.

Ballmer, who stepped down from Microsoft’s board of directors in August, spoke about the future of computer science as he announced a donation to Harvard University that will enable the school to expand its computer science faculty by 50%.

Neither Ballmer nor two Harvard representatives would divulge the amount of his donation.

Ballmer received a bachelor’s degree in applied mathematics and economics from Harvard College in 1977. He joined another Harvard alum, Bill Gates, at Microsoft in 1980. Today, Ballmer is the owner of the Los Angeles Clippers basketball team.

Harvard wasn’t the only academic institution to receive a large donation from Ballmer this week.

Ballmer and his wife Connie, a graduate of the University of Oregon, on Wednesday also announced a $50 million donation to her alma mater. That gift will be used to increase higher education opportunities and to beef up the university’s research efforts.

Ballmer said he is making the donation to Harvard to help make the university’s computer science department one of the top programs in the country.

“There are other great schools, but this is the one I went to,” Ballmer said. “Boston and Cambridge are the only places, other than Silicon Valley, where you can see that entrepreneurial flywheel spinning. We’re supporting new ideas and new work at Harvard, supported by computer science.”

He added that he hopes the university will also focus teaching and research on areas like online privacy and cybersecurity, but his main focus is on machine learning.

“It’s not about just putting in input and getting an answer,” Ballmer noted. “Computer science evolves and changes. This is going to be a fundamental area. I’m not trying to pick [what Harvard focuses on] but we do share a passion for this being a leading edge over the next several years.”

Machine learning is linked to artificial intelligence, the development of computers with skills that traditionally would have required human intelligence, such as decision-making and visual perception.

Artificial intelligence has been in the headlines since Elon Musk, the high-profile CEO of electric car maker Tesla Motors and CEO and co-founder of SpaceX, said in an interview at an MIT symposium that AI is nothing short of a threat to humanity.

“With artificial intelligence, we are summoning the demon,” Musk said at the end of last month.

Ballmer said he isn’t troubled about scientists pushing ahead with research into artificial intelligence or machine learning.

“It doesn’t concern me,” he said. “At the end of the day, will we have to have other innovations that protect people from privacy and security [problems]? Of course we will… I don’t think being afraid of any innovation is a good thing.”

He added that he doesn’t think self-driving cars, which would require artificial intelligence and machine learning, will proliferate for another 10 years. “I won’t be getting in any of them any time soon, at least not in the streets of Cambridge,” he said.

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Big-data project gives bird’s-eye view of the G20 – QUT News

Published in From the WWW on November 16th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Dr Peta Mitchell is mapping the G20 Leaders’ Summit as it plays out on Twitter and Instagram.

A QUT researcher is mapping the G20 Leaders’ Summit as it plays out on Twitter to find out how the event is affecting those inside the “barricades”.

In a first for any G20 event, the blue-sky big-data research project is mining and analysing tweets sent from inside the declared areas for location-based information.

Read the original article /reproduced from QUT News

Mapping the G20 – what does it look like?
• Mega-event Twitter study from inside the declared areas.
• #ColourMeBrisbane – kaleidoscope of the hashtag so far.
• G20 Hypometer – discover the most tweeted-about G20 countries.

Dr Peta Mitchell is then plotting those – minus the corresponding usernames – on an interactive map the public can explore via a website.

Need expertise with Cloud / Internet Scale computing / Hadoop / Big Data / Machine Learning / Algorithms / Architectures etc.? Contact me – I can help – these are primary expertise areas.

“No project has mapped G20 social media data in this way before, so we don’t really know what kind of information we’ll glean – or even how many tweets to expect each day from within the declared areas,” said Dr Mitchell, who is leading the project for QUT’s Social Media Research Group (SMRG) within the Creative Industries Faculty.

“I’m expecting to collect things like people’s reactions to the various G20 cultural activities, their opinions on traffic and public transport disruptions, political commentary as well as celebrity-spotting.

“Mapping tweets to specific locations is problematic because only 1-3 per cent of Twitter users turn on their location services feature, which means I’ll need to look at the content of each tweet to glean information about where that user was standing when they sent it.”

Dr Mitchell is using a process known as geoparsing to sift through potentially thousands of non-geotagged tweets per day, homing in on mentions of streets or landmarks within the declared areas, and assigning geographic coordinates to them.
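
A minimal sketch of that geoparsing idea is shown below – matching tweet text against a small gazetteer of place names and assigning coordinates. The gazetteer entries and code are illustrative assumptions, not the QUT project’s actual data or software.

```python
# Minimal sketch of the geoparsing idea described above: scan non-geotagged
# tweet text for known street/landmark names and assign approximate
# coordinates. The gazetteer entries below are illustrative placeholders.
import re

GAZETTEER = {
    "south bank":        (-27.4748, 153.0235),
    "queen street":      (-27.4679, 153.0281),
    "convention centre": (-27.4758, 153.0170),
}

def geoparse(tweet_text):
    """Return (place, (lat, lon)) pairs for any gazetteer entries mentioned."""
    text = tweet_text.lower()
    return [(place, coords) for place, coords in GAZETTEER.items()
            if re.search(r"\b" + re.escape(place) + r"\b", text)]

print(geoparse("Huge police presence near the Convention Centre today #G20"))
# -> [('convention centre', (-27.4758, 153.017))]
```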

She is also tracking delegates’ tweets and existing and emerging G20-related hashtags, such as #G20, #OnMyAgenda, #ColourMeBrisbane, #G20Cultural and #WalkingG20.

The SMRG, based in QUT’s Creative Industries Faculty, has already identified more than 180,000 G20-related tweets globally in the past two months.

Dr Mitchell said her big data mining and mapping project will provide a test bed for future large-scale events.

“From a pure research point of view, I want to know how a disruptive event like the G20 affects people’s mobility, and how it changes their perspective of Brisbane.

“Brisbane City Council has set clear aspirations for being: an accessible, connected city; a friendly, safe city; and a New World City. The G20 is an extremely prestigious event that may put those aspirations at odds with each other.

“Understanding how people use social media to talk about the disruptions that big events cause in their daily lives is very useful for organisations involved in planning large events – governments, emergency services departments, transport authorities, event organisers and even insurance underwriters,” Dr Mitchell said.

“This project is also a good way for the residents of Brisbane to get involved with the G20 even if they don’t want to be physically close to the city centre while it’s on.

“They can access the interactive map and see at a glance the emerging patterns and clusters of Twitter activity, and the content of the tweets, although not who their authors are.”

The interactive map of G20 tweets will be available to the public from Monday November 13 on the SMRG website and will be added to throughout the week.

Visitors to the site can also watch the SMRG’s G20 Hypometer, which tracks the G20 countries people are most tweeting about.

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Malware “Ecology” Viewed as Ecological Succession: Historical Trends and Future Prospects – Cornell University Library

Published in Snippet on November 10th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Malware “Ecology” Viewed as Ecological Succession: Historical Trends and Future Prospects

The development and evolution of malware, including computer viruses, worms, and trojan horses, is shown to be closely analogous to the process of community succession long recognized in ecology. In particular, both changes in the overall environment caused by external disturbances, as well as feedback effects from malware competition and antivirus coevolution, have driven community succession and the development of different types of malware with varying modes of transmission and adaptability.

Comments: 13 pages, 3 figures
Subjects: Cryptography and Security (cs.CR); Populations and Evolution (q-bio.PE)
Cite as: arXiv:1410.8082 [cs.CR]
(or arXiv:1410.8082v1 [cs.CR] for this version)

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Extracting data from air-gapped computers via mobile phones – Net Security

Published in From the WWW on November 9th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

A group of researchers from the Department of Information Systems Engineering at Ben-Gurion University in Israel has demonstrated and detailed a technique that can allow attackers to exfiltrate data from an “air-gapped” computer.

Read the original/reproduced from Net-Security

More often than not, computers housing sensitive data – whether it belongs to the government, a business, or any other type of organization – are kept off the Internet and internal networks and have their Bluetooth feature switched off in order to prevent attackers easily reaching and compromising them and the information they hold.

Often, even those individuals who are allowed to access or simply be in the vicinity of these computers are prohibited from having a mobile phone with them, which is usually left in a locker somewhere on the premises, but not very near to the place where these computers are located. Still, this security procedure can be violated, by accident or on purpose, and mobile phones might be brought close enough to be used in an attack.

The researchers dubbed their technique “AirHopper.” The premise for making it work is that the attacker has already compromised the computer containing the sensitive data, and is now looking for a way to exfiltrate it without anyone noticing.

“While it is known that software can intentionally create radio emissions from a video display unit, this is the first time that mobile phones are considered in an attack model as the intended receivers of maliciously crafted radio signals,” they explained in their paper.

They proved that a mobile phone with an FM radio receiver – whether it belongs to the attacker or to an individual working in the organization, oblivious that his phone has been compromised – can be used to extract the data by collecting the radio signals emanating from the compromised computer.

Their research proved that textual and binary data can be exfiltrated from a physically isolated computer to mobile phones at a distance of 1-7 meters. The transfer of the data is relatively slow – 13-60 Bps – but still fast enough to extract things like passwords.
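
Some back-of-the-envelope arithmetic (mine, not from the researchers’ paper) puts that 13-60 bytes-per-second channel in perspective:

```python
# Back-of-the-envelope arithmetic (not from the AirHopper paper itself):
# what a 13-60 byte-per-second covert channel means in practice.
for rate in (13, 60):                         # bytes per second
    pw_seconds = 16 / rate                    # a 16-byte password
    file_hours = 4096 / (rate * 3600)         # a 4 KB file, in hours
    print(f"{rate:>2} B/s: 16-byte password in ~{pw_seconds:.1f} s, "
          f"4 KB file in ~{file_hours:.2f} h")
# 13 B/s: 16-byte password in ~1.2 s, 4 KB file in ~0.09 h
# 60 B/s: 16-byte password in ~0.3 s, 4 KB file in ~0.02 h
```

In other words, short secrets such as passwords leak in seconds, while anything bulky would take hours.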

It is widely believed that this type of attack is already being performed by intelligence agencies, and by the US NSA in particular.

There are ways to prevent this type of attack. “Countermeasures of the technical kind include physical insulation, software-based reduction of information-bearing emission, and early encryption of signals. Procedural countermeasures include official practices and standards, along with legal or organizational sanctions,” the researchers noted.

Read the original/reproduced from Net-Security

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

European research funding: it’s like Robin Hood in reverse – The Guardian

Published in Snippet on November 9th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

‘Researchers from eastern Europe have next to no chance of getting an ERC grant.’ Photograph: Alamy

European research funding: it’s like Robin Hood in reverse.

The EU’s Horizon 2020 programme has a budget of £63bn, but don’t expect a share unless you’re in one of the wealthiest countries and have a string of articles published in top journals

Anonymous academic. theguardian.com, Friday 7 November 2014 07.00 GMT. Read the full article from The Guardian

If you ask officers of the European commission in research and innovation whether any of the funding attached to Horizon 2020 (the biggest EU research and innovation programme ever) will improve research career conditions, they are likely to politely cough, roll their eyes and answer: yes. They’d point, for example, to the European Research Council (ERC) starting grants and the Marie-Sklodowska-Curie individual fellowships which both fall under the £63bn programme. But are such initiatives really having an impact?

On the one hand, the ERC starting grants are extremely competitive. They are supposed to fund only the cream of excellent European researchers. It has been argued that this focus on research excellence is a smokescreen for funding austerity. At any rate, three hundred researchers received a starting grant from ERC in 2013, with a success rate of about 10%.

Many young researchers have just given up applying. The chances of being supported by host institutions with their application, particularly if not doing research in a trendy field, are remote – especially if they haven’t won the lottery of publishing in elite journals such as Nature and Science.

Researchers from eastern Europe have next to no chance of getting an ERC grant. In fact, the ERC increasingly looks like a reversed Robin Hood scheme, given that most ERC funds go to well-off countries. For example, in 2013 about three out of four ERC starting grants (222 out of 300) went to researchers hosted by institutions in the UK, Germany, Israel, France, the Netherlands and Switzerland.

By increasing competition for a small number (yes, 300 is a small number for the whole of Europe – in 2010 around 100,000 PhDs were awarded in the EU) of positions, the ERC is not actually smoothing the transition from early career to established researcher in Europe.

On the contrary, ERC grants are increasing inequality (and thus stress) among researchers. It has already been argued that academia, which relies on a supply of outsiders who are willing to forgo decent wages in the hope of getting a well-paid and prestigious tenured job, resembles the dynamics of a drug gang – ERC grants are making the situation worse.

Ironically, the few lucky researchers who win ERC starting grants are forced to abandon real research and become mini-funding council managers because the money is so generous. A single grant is worth up to €2m (£1.6m approx), when the average postdoc in Italy and many other EU countries has to live on €1,000 (£784 approx) per month, if they are fortunate enough to have a contract.

What will other researchers do, given the widespread research funding cuts across Europe? Well, they can apply for postdoc positions advertised for the projects of the successful ERC applicants. I have seen ads for three month postdocs in such ERC projects.

Agree? Disagree? Can you relate? Sounds familiar?

Continue reading the full article at The Guardian

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

How to Exchange Encrypted Messages on Any Website – MIT News

Published in From the WWW on November 9th, 2014 | Comments Off

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

Strong encryption is the best way to ensure that no one can read messages you send online. After last year’s revelations about U.S. Internet surveillance raised interest in privacy tools, Google and Yahoo both announced they were working on software to let people who use their e-mail services easily exchange encrypted messages.

Now a prototype browser extension called ShadowCrypt, made by researchers at the University of California, Berkeley, and the University of Maryland, goes even further. It makes it easy to send and receive encrypted text on Twitter, Facebook, or any other website.

A new tool brings simple encrypted messaging to any webmail or social networking site. By Tom Simonite on November 5, 2014. Read the original/reproduced from MIT news

Using ShadowCrypt, a person who writes or is authorized to read a tweet or e-mail sees normal text. The site operator or anyone else looking at or intercepting the posting would see a garbled string of letters and numbers.

ShadowCrypt was created to show that strong encryption could be made both simple to use and compatible with popular services such as Twitter, says Devdatta Akhawe, a security engineer at Dropbox who helped develop ShadowCrypt as a grad student at Berkeley. “We wanted to show how you could make a practical, fast mechanism that is easy to use,” he says. Akhawe and colleagues tested ShadowCrypt on 17 different major Web services; it worked more or less flawlessly on 14, including Facebook, Twitter, and Gmail.

PGP, software first released in 1991, is probably the best-known software for encrypted messaging, but it is notoriously difficult to master. In general, existing tools for encrypted messaging tend to either require switching to a new service, such as Silent Circle (see “An App Keeps Spies Away from Your Phone”), or are very clunky.

To use ShadowCrypt you install the extension and then create encryption keys for each website you wish to use it with. A small padlock icon at the corner of every text box is the only indication that ShadowCrypt is hiding the garbled encrypted version that will be submitted when you hit the “send” or “post” button.

Other people can read that text if you provide them with the encryption key used to create it, so they can add it to their own ShadowCrypt settings. After they have done that, any text they view that has been encrypted with that key appears normal to them.

For example, the tweet below is perfectly readable to anyone that has installed ShadowCrypt, because it was encrypted using the extension’s default key for Twitter.com. Multiple keys can be made for any one site and it is easy to choose from them. You might use a different one for each person you wish to e-mail securely, for example.
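
ShadowCrypt itself is a browser extension, but the sharing model described here – one symmetric key per site, handed to whoever should be able to read the posts, while the site only ever stores ciphertext – can be sketched in a few lines. The snippet below uses the third-party cryptography package’s Fernet recipe purely as an illustration; it is not ShadowCrypt’s actual implementation.

```python
# Illustration of the shared-key idea behind ShadowCrypt, NOT its actual
# implementation: one symmetric key per site; anyone holding the key can
# read the post, while the site itself only ever sees ciphertext.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

twitter_key = Fernet.generate_key()      # per-site key, shared out of band
cipher = Fernet(twitter_key)

posted = cipher.encrypt(b"meet at 6pm")  # what the website would store
print(posted)                            # garbled string of letters/numbers

# A friend who has been given twitter_key can recover the plaintext:
print(Fernet(twitter_key).decrypt(posted))   # b'meet at 6pm'
```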

ShadowCrypt is still a research project, but independent cryptography researcher Justin Troutman says its design demonstrates a useful new approach to online security.

That’s because it offers a way for people to take control of the security of the data they put into a Web service, he says. More often, most attention is paid to protecting data only as it travels to and from service providers’ servers. “It’s a step toward building a more benign surface for interacting with Web apps,” says Troutman.

A paper on ShadowCrypt, the code for which is open-source, will be presented at the ACM Conference on Computer and Communications Security this week.

A new tool brings simple encrypted messaging to any webmail or social networking site. By Tom Simonite on November 5, 2014. Read the original/reproduced from MIT news

Reproduced and/or syndicated content. All content and images are copyright the respective owners.

© all content copyright respective owners