ARM details next-generation 8-way graphics core

ARM Holdings plc has announced the next core in its Mali line of graphics processing units, a design intended to start appearing in smartphone systems-on-chip in 2013.
The Mali-T658 design supports up to eight shader cores, compared with the Mali-T604’s four, and ARM has also doubled the number of arithmetic pipelines per shader core from two to four.

ARM (Cambridge, England) claims the result is up to 10 times the graphics performance of the Mali-400 GPUs found in mainstream consumer products today in 40-nm silicon, and four times the GPU compute performance of the quad-core Mali-T604. The Mali-T604, the previous top of the range, was launched at ARM TechCon in 2010 and is expected to appear in silicon in 2012.

Like its predecessor, the Mali-T658 is expected to perform some general-purpose computing on suitable applications, which may include image processing, augmented reality or running physics-engine software for games. ARM has at least two more graphics cores on its roadmap to take performance up and to the right.

And for now ARM claims the Mali-T658 will give the company performance leadership over its rival graphics core licensor Imagination Technologies Group plc (Kings Langley, England). Imagination’s top-of-the-line GPU is a PowerVR Series 6 design that goes by the codename Rogue. “Imagination and Vivante; they are the competition,” said Ian Smythe, director of marketing for ARM’s media processing division.

However, ARM also claims that Mali in all its versions, with 57 licenses and 29 SoCs designed, is now the most widely licensed GPU architecture.

ARM emphasizes that with the Mali-T658, SoC designers are able to make use of a carefully crafted system-level approach to multicore design. That approach includes ARM Cortex processor cores, big.LITTLE power-efficiency technology and cache-coherent interconnect.

As a result, designers are expected to target high-end smartphones on 28-nm silicon with quad-core Mali-T658 implementations coming to market in 2013, and eight-core Mali-T658 graphics units on 20-nm silicon in 2015. The core is also expected to find application in tablet computers, smart TVs and automotive infotainment systems.

The Mali-T658 will be able to take on computation tasks in applications such as image processing or augmented reality. The core has been made compatible with the recently announced Cortex-A7/Cortex-A15 big.LITTLE coupling, so that as computation is moved onto the T658 it may accompany the movement of the core program down from the A15 to the A7, said Jem Davies, vice president of technology for the media processing division at ARM. The autonomous nature of the Mali Job Manager, and its ability to carry on graphics processing with a reduced load on the CPU, means it is suited to working alongside a big.LITTLE CPU system. By using the right processor for the right task, the Mali-T658 is able to handle GPU compute tasks in parallel with the CPU handling the always-on, always-connected tasks. ARM CoreLink system IP enables system-level cache coherency across clusters of multicore processors, including the Cortex-A15 and Mali-T658.
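As a rough illustration of the division of labour described above — heavy data-parallel work offloaded to the GPU, latency-sensitive work on the big core, background work on the LITTLE core — here is a toy placement policy in Python. The task fields and labels are illustrative assumptions, not ARM's actual scheduler interface:

```python
def place_task(task):
    """Toy placement policy in the spirit of big.LITTLE plus GPU compute:
    heavy data-parallel work goes to the GPU shader cores, latency-sensitive
    work to the big Cortex-A15, and background work to the LITTLE Cortex-A7."""
    if task["data_parallel"] and task["workload"] == "heavy":
        return "Mali-T658 (GPU compute)"
    if task["latency_sensitive"]:
        return "Cortex-A15 (big)"
    return "Cortex-A7 (LITTLE)"

# Invented example tasks, purely to exercise the policy.
tasks = [
    {"name": "image filter", "data_parallel": True,  "workload": "heavy", "latency_sensitive": False},
    {"name": "UI thread",    "data_parallel": False, "workload": "light", "latency_sensitive": True},
    {"name": "email sync",   "data_parallel": False, "workload": "light", "latency_sensitive": False},
]
placements = {t["name"]: place_task(t) for t in tasks}
```

In a real system this decision is made cooperatively by the OS scheduler and the Mali Job Manager; the sketch only captures the "right processor for the right task" idea.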

In addition the Mali-T658 is compatible with the ARMv8 full 64-bit instruction-set architecture, as is the Mali-T604.

ARM lead partners on the development of the Mali-T658 are listed as Fujitsu Semiconductor, LG Electronics, Nufront and Samsung.

As would be expected, the Mali-T658 GPU supports all popular graphics and compute APIs, including Microsoft DirectX 11, Khronos OpenGL ES, OpenVG, Khronos OpenCL, Google Renderscript and Microsoft DirectCompute.

Posted in Uncategorized | Leave a comment

Google’s Moto bid: It’s all about the patents

Google’s $12.5 billion bid for Motorola Mobility is all about the patents. The only people really happy about Google acquiring a smartphone and set-top business are Android’s competitors.
By merging Motorola’s 17,000 issued patents with its own, the Internet giant hopes to create a legal shield that protects the Android ecosystem from death by a thousand patent suits. Indeed, the legal threats against Google and its mobile partners have been growing as fast as Android’s market share.

But a Motorola acquisition also creates a hornet’s nest of problems for Google and its Android smartphone and set-top partners. A wide community of embedded developers in everything from airplanes to X-ray machines will now watch to see how well the ecosystem weathers the inevitable storms.

In the end, there’s no doubt Google was bold, but it may take years before anyone knows whether it was wise.

Don’t believe for a second Google wants to be a maker of smartphones or set-tops. Such hardware businesses with their complex supply chains and rapid product cycles are anathema to Web companies today.

Google clearly felt it had its back against the wall. In an August 3 blog post, Google’s chief legal officer, David Drummond, signaled the rising heat when he alleged “a hostile, organized campaign against Android by Microsoft, Oracle, Apple and other companies, waged through bogus patents.”

Oracle fired one of the first big volleys in August 2010 when it filed a broad suit against Google for infringing Java patents in Android. Drummond pointed to separate suits against Android partners including Barnes & Noble, HTC, Motorola and Samsung.

In a lower-profile effort, Apple, EMC, Microsoft and Oracle quietly created a consortium called CPTN Holdings LLC to buy 882 Novell patents that read broadly on areas including open-source software, such as Linux, and virtualization. The U.S. Department of Justice forced the companies in April to revise their deal in an effort to ensure the patents would be fairly licensed.

The big blow came last month when a consortium including Apple, Microsoft and Research in Motion acquired for $4.5 billion about 6,000 Nortel patents, many of them said to be fundamental wireless patents. Google quickly bought 1,000 patents from IBM, but it clearly thought the move was not enough.

With the Nortel patents, competitors could levy patent licensing fees of as much as $15 per Android handset, Drummond said in his blog. Press reports suggested Microsoft stood to make more from the sale of Android handsets than from its own Windows Phone 7 software.

The fact that a group of competitors could orchestrate a set of legal moves that threaten to freeze free software out of the market is a potent indictment of the current patent system. The top-line consequences of Google’s reaction are equally stunning.

Google is paying an estimated 63 percent premium for Motorola, making it the biggest proposed acquisition in its history to date. Google even agreed to pay a whopping $2.5 billion fee if it walks away from the deal, according to the New York Times.

The all-cash deal uses about a quarter of Google’s cash reserves. The fact that such a large bid is motivated primarily by the need to protect a strategic patent portfolio further underscores the absurdity of the situation. But this is not the first time this absurdity has come to light in the electronics industry.

Even in the smartphone sector, Research in Motion was stung in 2006 by a $612 million settlement after it mishandled a suit over its implementation of mobile email.

In recent years a herd of so-called patent trolls has emerged solely to acquire and assert patents. They have spawned a separate set of companies geared to helping large product companies cope with such threats, including Intellectual Ventures, Allied Security Trust and RPX Corp.

The rise of patent litigation has turned up the heat in recent years for patent reform. The Supreme Court has handed down several decisions on a wide variety of issues to deal with some of those problems. A bill still awaiting final approval in Congress aims to tackle others, but experts say none of the moves will impact the heated patent battle around Android.

Google would not comment on how the acquisition of Motorola’s patents—including about 7,500 Motorola patent applications in process—might shift its legal strategy in the many Android suits. “But we will be in a very good position to protect the legal situation for all the [Android] partners,” said Drummond on a conference call announcing the deal.

“Motorola’s IP team has lots of experience dealing with the patent assertion in the wireless space, and you would expect Google to make good use of that team,” said Mike McLean, a vice president of professional services at TechInsights, an IP consulting group that is part of UBM LLC, the publisher of EE Times.

“Having a [Motorola] product business in this space should provide a strong position for injunctive relief if [Google] pursues litigation,” McLean added.

The deal requires approval from antitrust regulators already examining Google for dominance in other areas such as online advertising. Drummond expressed confidence in getting the approvals given Google’s current lack of a hardware business and the broad industry use of Android.


The hiring problem

By Brian Fuller

YOU’VE PROBABLY HEARD the statistics: The percentage of U.S. companies having difficulty filling open job reqs is north of 60 percent now, up from around 15 percent just last year. And the third-hardest jobs to fill this year are engineering positions, behind skilled trades and sales representatives.
How is that possible, when official unemployment is 9 percent and
unofficial unemployment or underemployment is double that?

Turns out the problem is with the companies themselves—or, more precisely, their expectations. So writes Peter Cappelli, a Wharton professor and director of Wharton’s Center for Human Resources, in a recent piece for The Wall Street Journal. Companies, Cappelli argues, expect plug-and-play recruits. That’s just not realistic, and it contributes to what he calls an “inflexibility problem.”

As Cappelli puts it: Finding candidates to fit jobs is not like finding pistons to fit engines, where the requirements are precise and can’t be varied. Jobs can be organized in many different ways, so that candidates who have very different credentials can do them successfully.

Companies don’t train. What Cappelli doesn’t touch on is that companies no longer mentor, either, and I think that’s an even bigger problem. We know this is probably a problem in this industry, because many EE Lifers have been unemployed for more than a year or two, and they’re incredibly experienced. As with everything, a reality check is in order.

What’s your take?

If you’re hiring, are you finding it difficult to find qualified candidates? Is your
company doing less training than it used to? Has it abandoned co-op programs for college engineering students, or are such programs still a valued recruiting tool?

Management has always tried to model employees as plug-and-play lumps of meat; that makes project planning and budgeting much easier. The strategy might have worked in the past, when graduates could expect to emerge from college with a reasonable percentage of industry knowledge under their belts; but with the explosion of technical detail, a graduate is an empty vessel. It’s also
not surprising that companies are doing less training. Years ago people took jobs for life, so training investment made sense; these days, the workforce is fluid.
If you want to work in a specific area, then skill up on that.
— eembedded_janitor

Our society is [adopting] the European mentality. You can’t get a drill bit unless you have a drill bit license. If you need a hole drilled, you must find the person with that license.
— KimChristensen

The inflexibility problem has a name; recruiters call it the “purple squirrel” problem, where a company defines a position with such depth and precision that it is impossible to fill, except by some mythical creature like a purple squirrel. In some cases the job posting is just plain stupid, like “Senior USB 3.0 designer, must have 10 years’ experience designing USB 3.0.”
— Frank Eory

The toughest requirement lately is that typically you must be employed [in order to be considered for employment]. … There’s another hidden issue: Many U.S. companies post jobs only to retain existing H-1B employees. Companies are required to post any H-1B positions as open reqs; hence they pretty much try to exclude anyone who might take a job from an H-1B worker a company wants to retain (typically for lower wages).
— Underemployed Geek


The Idiap Research Institute introduces the “virtual secretary”

By Hervé Bourlard, Director of the Idiap Research Institute, and Andrei Popescu-Belis, Senior Researcher, 19 January 2011

The automatic recognition and transcription of conversational human speech is a long-term goal at the convergence of several disciplines such as signal processing, human language technology, and artificial intelligence. In the nearly twenty years since its foundation in Martigny, the Idiap Research Institute has contributed pioneering research to the challenge of Automatic Speech Recognition (ASR), and continues to be at the forefront of scientific and technical advancement in this area, which has a potentially very large number of applications.

More and more scientific and technological advances are making ASR systems increasingly user-friendly and intuitive, depending on the type of application. High-performance ASR systems, even for unconstrained conversational speech, are now within reach, opening up new opportunities for applications. The promising field of ASR presents a range of challenges which Idiap and its partners are addressing through large EU projects, namely AMI (Augmented Multi-party Interaction) and AMIDA (AMI with Distance Access). These significant efforts aimed at deriving user-friendly commercial applications from ASR have been ongoing for over 8 years and have benefited from over 25 million euros in funding. Some immediately promising advanced applications using real-time ASR include a “virtual secretary” that suggests relevant reference documents or Web pages during meetings.


Challenges for speech recognition: input signals

In optimal conditions, namely with a single speaker using a high-quality microphone in a noiseless environment, the performance of ASR systems has reached levels only slightly below human performance. This is especially the case when using an ASR system that has “learned the user’s voice”, as in the personal dictation systems already developed in the 1990s. However, performance degrades quite rapidly when one or more of the above conditions are not fulfilled: typically, for conversations involving several people (hence possible overlaps in speech), using far-field microphones, and in the presence of non-speech noise. Solving such challenges in the context of multi-party meetings, by developing a functional (and preferably real-time) ASR system, was the long term goal of speech technology in the European AMI Consortium.

The AMI system for large-vocabulary conversational speech recognition was developed and tested specifically for the meeting environment, with several possible types of input signals: from individual head microphones, or from multiple microphones on the meeting table arranged in a microphone array whose geometric configuration is known. Microphone arrays enhance speech signals through beamforming, a technique that filters and combines the individual microphone signals in order to enhance the audio coming from a particular location in the meeting room. In addition to providing an improved speech signal, microphone arrays also help identify which participant is speaking at a given moment, an important task known as speaker diarization.
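Delay-and-sum beamforming, the simplest form of the technique described above, can be sketched in a few lines of Python: each microphone's signal is shifted by its known arrival delay and the aligned signals are averaged, reinforcing the source while averaging out uncorrelated noise. The signal and integer sample delays below are synthetic examples, not AMI code:

```python
import math

def delay_and_sum(mic_signals, delays):
    """Align each microphone signal by its (known) integer sample delay
    and average the aligned signals."""
    n = min(len(s) - d for s, d in zip(mic_signals, delays))
    return [sum(s[d + i] for s, d in zip(mic_signals, delays)) / len(mic_signals)
            for i in range(n)]

# A toy 440 Hz source sampled at 8 kHz, heard by two microphones
# with different propagation delays (3 and 5 samples).
source = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(160)]
mic_a = [0.0] * 3 + source
mic_b = [0.0] * 5 + source

enhanced = delay_and_sum([mic_a, mic_b], [3, 5])  # recovers the source
```

In a real array the delays come from the known geometry of the microphones and the location being "aimed at"; here they are simply given.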

Architecture and results of the ASR system

The AMI ASR system makes use of advanced statistical ASR technology, based on significant exploitation and enhancement of so-called “hidden Markov models” for acoustic modeling of the pronunciation variability of the lexicon words. The system also uses sophisticated statistical language models referred to as “N-gram” models, which predict the probability of a specific word being pronounced given the N previous ones. For conversational speech recognition in meetings, the number of lexicon words can be as high as 100,000 and N can be as high as 5.
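A minimal illustration of the N-gram idea (here with N = 2, i.e. a bigram model) fits in plain Python; the toy corpus is invented for the example and has nothing to do with the AMI lexicon:

```python
from collections import Counter, defaultdict

def train_bigram(sentences):
    """Count bigram and preceding-word frequencies from tokenized sentences."""
    bigrams, unigrams = defaultdict(Counter), Counter()
    for words in sentences:
        for prev, word in zip(words, words[1:]):
            bigrams[prev][word] += 1
            unigrams[prev] += 1
    return bigrams, unigrams

def prob(bigrams, unigrams, prev, word):
    """P(word | prev): the maximum-likelihood bigram estimate."""
    return bigrams[prev][word] / unigrams[prev] if unigrams[prev] else 0.0

corpus = [["the", "meeting", "starts", "now"],
          ["the", "meeting", "ends", "soon"]]
bigrams, unigrams = train_bigram(corpus)
# "meeting" follows "the" in both sentences, so P(meeting | the) = 1.0,
# while P(starts | meeting) = 0.5.
```

Production systems add smoothing for unseen word sequences and extend the context window up to the 5-gram models mentioned above, but the counting principle is the same.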

The complete system as used in competitive evaluations operates in no fewer than ten passes over the input data, exploiting more and more detailed models. The initial pass only serves to obtain a rough transcript to provide input for adapting acoustic models, while the following passes generate bigram word lattices which are expanded using 4-gram language models and rescored using models that are trained differently, for example on varying training data.

Each pass normally outputs both a first-best result and a word graph: the latter is used to constrain the search space for subsequent stages, and allows the outputs of several complementary models to be combined. Depending on the constraints on processing time, system complexity is usually increased by adding passes, although the gains from later passes tend to decrease as their number grows. Recently, a major achievement has been the design of a real-time version of the ASR system, keeping up with a speaker’s production rate.
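The rescoring step can be caricatured as re-ranking an n-best list by mixing each hypothesis's first-pass acoustic score with a second-pass language-model score. The hypotheses, scores and weight below are invented for illustration and do not reflect the AMI system's actual models:

```python
def rescore(nbest, lm_score, lm_weight=0.5):
    """Re-rank an n-best list (hypothesis, acoustic_score) by interpolating
    the first-pass acoustic score with a second-pass LM score."""
    return sorted(nbest,
                  key=lambda h: (1 - lm_weight) * h[1] + lm_weight * lm_score(h[0]),
                  reverse=True)

# First-pass n-best list: (hypothesis, acoustic score), higher is better.
nbest = [("wreck a nice beach", 0.62),
         ("recognise speech", 0.60)]

# Toy "language model": favours hypotheses containing the word "speech".
lm = lambda text: 0.9 if "speech" in text else 0.1

best = rescore(nbest, lm)[0][0]  # the LM flips the first-pass ranking
```

In the real system the "list" is a word lattice rather than a handful of strings, which lets many alternatives be rescored efficiently, but the score-combination idea is the same.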

The performance of the complete non-real-time system reaches about 25% word error rate on signals from individual head-mounted microphones, an extremely competitive performance assessed in international evaluation campaigns. Accuracy and speed are especially important for commercial applications such as those targeted by Koemei, a young Idiap spin-off. In complete applications though (besides pure dictation systems), the ASR system is in reality only the first processing step of content extraction from spoken conversations, which includes other stages such as diarization, named entity recognition, syntactic chunking, dialogue act segmentation and classification, topic segmentation, and summarization. All these aspects of content extraction help facilitate search in multimedia recordings of events such as conferences, a capacity that is put to work in another Idiap spin-off called Klewel. To improve the utility of ASR for these analyses, future objectives include improving the robustness, speed, and accuracy of the system, as well as dealing with larger or more flexible vocabularies of recognizable words. The addition of new languages, in particular the Swiss national ones, is also under way.

Application of ASR to a “virtual secretary”

A very large number of more user-oriented applications have been considered for automatic speech recognition, from dictation-based interfaces replacing keyboards to search-and-retrieval from spoken archives and to human-computer voice-based dialogue. In current work at Idiap, we are also paying particular attention to the integration of content extraction modules into several types of meeting assistants, which are systems that can help meeting participants with various tasks, in close-to real-time (in some cases, delays of several seconds or even minutes may be acceptable). In particular, Idiap has been working on an application of real-time ASR to design a speech-based document retrieval system called the “Automatic Content Linking Device”, but often referred to as a “virtual secretary”.

This prototype answers the well-known need for information access as a secondary activity, for instance when users are involved in a principal activity that does not allow them to use a traditional search interface (with a keyboard, mouse, and display), or even to concentrate fully on initiating a search. Such a need for secondary search arises during meetings. Often people need further information during a meeting (e.g., previous meeting minutes, Google search results, etc.), but they cannot lay their hands on it, at least not during the meeting itself, because searching would require an interruption of the discussion. And yet, producing the right piece of information at the right time can change the course of a meeting.

A careful listener

The Automatic Content Linking Device answers this need by listening to a meeting and searching quietly in the background for the most relevant documents and past meeting segments from a multimedia database, or from the Web. The past meeting segments are made available thanks to offline speech recognition, and the documents include past reports, emails, or presentation slides. The system performs searches at regular intervals over the multimedia databases, with a search criterion that is constructed based on the words that it recognizes automatically from the ongoing discussion using real-time ASR.

The system keeps up-to-date search results ready for whenever someone in the meeting feels the need to consult them, and is also able to indicate which of the recognized words have enabled the retrieval of each document. Participants in the discussion thus only need to decide if they want to explore any further, and possibly introduce in their subsequent discussions, the documents or past meeting fragments retrieved automatically for them. The system can be used privately by each participant, but another approach is to have it used jointly by all participants, on a dedicated projection screen. And, it can also be used to enrich a past recording with documents. Search on demand at a given moment, as opposed to regular intervals, is also possible.
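The core loop described above — turn recently recognized words into a query, score each document by keyword overlap, and surface the top hits — can be sketched in Python. The stopword list, documents and transcript below are invented examples, not the Automatic Content Linking Device's actual retrieval engine:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "for", "we", "is", "on"}

def keywords(text):
    """Content words extracted from a stretch of recognized speech."""
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]

def link_documents(transcript_window, documents, top_k=2):
    """Rank documents by how many recently recognized keywords they share,
    so results are ready whenever a participant wants to consult them."""
    query = Counter(keywords(transcript_window))
    scores = {name: sum(query[w] for w in set(keywords(body)))
              for name, body in documents.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Hypothetical document store and a window of recognized speech.
documents = {
    "minutes_2011_03.txt": "minutes of the budget meeting about remote controls",
    "slides_design.pdf":   "industrial design slides for the remote control case",
    "email_hr.txt":        "holiday schedule for the human resources team",
}
recent_speech = "we discussed the budget for the remote control design"
matches = link_documents(recent_speech, documents)
# ranks the design slides and budget minutes above the unrelated HR email
```

A real deployment would refresh the query at regular intervals over the live ASR stream and keep track of which recognized words triggered each retrieved document, as described above.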

While other query-free systems for just-in-time retrieval have been proposed in the past, the Idiap system is the first one that is implemented in the context of human conversations, based on ASR and keyword spotting. Moreover, it is also the first system to give access to processed multimedia recordings, documents and websites at the same time, in a fully autonomous way.

The Automatic Content Linking Device is a joint achievement that was coordinated by Idiap within the European AMI Consortium and the ongoing Swiss National Centre of Competence in Research (NCCR) on “Interactive Multimodal Information Management” (IM2). The system is composed of several modules, which were completed partly at Idiap and partly at collaborating institutions. The first prototype was designed in 2008, and since then several versions have been demonstrated at academic or user-oriented events.

The prototypes have received positive verbal evaluations from potential industrial and academic partners, who proposed additional application scenarios of interest to them and provided useful feedback and suggestions for future work. The most recent version of the system is being installed in one of the collaborative spaces of the EPFL Rolex Learning Center, in collaboration with the CRAFT team. The goal is to assist and stimulate discussions in these rooms, in an education-oriented perspective, while integrating the speech-capture and information-suggestion functions into the rooms’ own specific architecture.

IDIAP’S SPINOFF

The virtual video editor of Klewel

Founded in 2008 as a spin-off of IDIAP, Klewel provides solutions for capturing, searching and sharing the information contained in multimedia digital recordings of conferences. Its system can handle multiple cameras, one or more audio channels (e.g. the original speech and its interpretation in several foreign languages) and the projected slides. The system is completely non-intrusive, as data is captured directly from the sources and synchronized; the speakers do not need to provide Klewel with any original slides. Once the capture step is performed, all the data is uploaded to servers to be processed. This solution automatically references the full content of presentations: the content of the slides is automatically indexed, and the multimedia files are encoded into a format suitable for the web. These presentations are quickly published and then fully accessible from an intranet or a website. Any interested party can immediately access and retrieve specific information without needing to play back the full presentation. A keyword search will retrieve relevant information from all archived presentations, across multiple conferences if needed.

The virtual clerk of Koemei

Thanks to the cutting-edge research in ASR conducted by IDIAP and its partners, Koemei, a spin-off of the Martigny-based institute incorporated in 2010, provides an advanced solution that automatically transcribes conversational speech into text – thereby opening up a wide range of new computer applications based on spoken human language. The cloud-based speech recognition solution of the young company is specifically designed for multiparty conversations. Koemei targets particularly the transcription markets for meeting recording, lecture capture, videoconferencing, telepresence, multimedia indexing, speech mining and analytics, search engine optimization and voicemail-to-text. Its technology enables global businesses, government agencies, educational institutions, telecom operators, professional service providers and multimedia organizations to use speech to power multiple mission-critical applications and services. Moreover, third-party application developers can access this speech recognition platform to develop specific solutions.



Podcastle – searching podcasts

PodCastle is a service that enables searching of speech data such as podcasts, individual audio or video files on the web, and video clips on video sharing services (Nico Nico Douga, YouTube, and Ustream). Speech data are converted to text data by using an automatic speech recognition technology. All users accessing the PodCastle service can correct speech recognition errors.

<from http://en.podcastle.jp/>


Wolfson Microelectronics buys Australia’s Dynamic Hearing

21 September 2011 Last updated at 13:42

Edinburgh-based Wolfson Microelectronics is to buy the Australian software company Dynamic Hearing in a deal worth £3.3m. Dynamic Hearing provides technology for mobile phones, Bluetooth headsets and hearing aids.

Wolfson has been working with Dynamic Hearing for 18 months, developing equipment to run on its Audio Hub products. The company employs about 370 people in 12 locations across the world.

Commenting on the acquisition, Mike Hickey, chief executive of Wolfson, said: “We are delighted to welcome Dynamic Hearing’s highly skilled team of employees into the Wolfson family. This acquisition secures important intellectual property, adds to our customer base and supports our leadership position in delivering HD Audio solutions for the consumer electronics market.”

In June, Wolfson – which supplies BlackBerry, Samsung and sat-nav maker TomTom – saw its share price fall by more than a quarter after announcing that sales in the previous three months had been lower than expected.

<from http://www.bbc.co.uk/news/uk-scotland-scotland-business-15005906>


When algorithms control the world

23 August 2011 Last updated at 01:42

By Jane Wakefield

If you were expecting some kind of warning when computers finally get smarter than us, then think again.

There will be no soothing HAL 9000-type voice informing us that our human services are now surplus to requirements.

In reality, our electronic overlords are already taking control, and they are doing it in a far more subtle way than science fiction would have us believe.

Their weapon of choice – the algorithm.

Behind every smart web service is some even smarter web code: from web retailers calculating what books and films we might be interested in, to Facebook’s friend-finding and image-tagging services, to the search engines that guide us around the net.

It is these invisible computations that increasingly control how we interact with our electronic world.

At last month’s TEDGlobal conference, algorithm expert Kevin Slavin delivered one of the tech show’s most “sit up and take notice” speeches where he warned that the “maths that computers use to decide stuff” was infiltrating every aspect of our lives.

Among the examples he cited were a robo-cleaner that maps out the best way to do housework, and the online trading algorithms that are increasingly controlling Wall Street.

“We are writing these things that we can no longer read,” warned Mr Slavin.

“We’ve rendered something illegible. And we’ve lost the sense of what’s actually happening in this world we’ve made.”

Million-dollar book

Algorithms may be cleverer than humans but they don’t necessarily have our sense of perspective – a failing that became evident when Amazon’s price-setting code went to war with itself earlier this year.

“The Making of a Fly” – a book about the molecular biology of a fly from egg to fully-fledged insect – may have been a riveting read but it almost certainly didn’t deserve a price tag of $23.6m (£14.3m).

It hit that figure briefly on the site after the algorithms used by Amazon to set and update prices started outbidding each other.

It is a small taste of the chaos that can be caused when code gets smart enough to operate without human intervention, thinks Mr Slavin.

“This is algorithms in conflict without any adult supervision,” he said.
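The feedback loop behind the $23.6m fly book is easy to reproduce. The repricing ratios below are roughly those reported at the time (one seller pricing just below its rival, the other well above); because their product exceeds 1, each round of automatic repricing lifts both prices with no ceiling:

```python
def price_war(start_a, start_b, rounds=25):
    """Simulate two repricing bots: B prices just below A (0.9983x),
    then A prices well above B (1.2706x). Since 0.9983 * 1.2706 > 1,
    both prices ratchet upward every round."""
    a, b = start_a, start_b
    history = [(a, b)]
    for _ in range(rounds):
        b = round(0.9983 * a, 2)   # undercut the rival slightly
        a = round(1.2706 * b, 2)   # reprice at a fixed markup over the rival
        history.append((a, b))
    return history

history = price_war(35.54, 35.54)
# both prices climb monotonically, round after round
```

The ratios are illustrative reconstructions, not Amazon data; the point is only that two individually sensible rules, composed without supervision, diverge.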

As code gets ever more sophisticated it is reaching its tentacles into all aspects of our lives, including our cultural preferences.

The algorithms used by movie rental site Netflix are now responsible for 60% of rentals from the site, as we rely less and less on our own critical faculties and word of mouth and more on what Mr Slavin calls the “physics of culture”.

Leading role

British firm Epagogix is taking this concept to its logical conclusion, using algorithms to predict what makes a hit movie.

It takes a bunch of metrics – the script, plot, stars, location – and crunches them all together with the box office takings of similar films to work out how much money it will make.

The system has, according to chief executive Nick Meaney, “helped studios to make decisions about whether to make a movie or not”.

In the case of one project – which had been assigned a £180m production cost – the algorithm worked out that it would only take £30m at the box office, meaning it simply wasn’t worth making.

For another movie, it worked out that the expensive female lead the studio had earmarked for a film would not yield any more of a return than using a less expensive star.

This rather clinical approach to film-making has irked some who believe it to be at odds with a more creative, organic way that they assume their favourite movies were made.

Mr Meaney is keen to play down the role of algorithms in Hollywood.

“Movies get made for many reasons and it credits us with more influence than we have to say we dictate what films are made.

“We don’t tell them what the plot should be. The studio uses this as valuable business information. We help people make tough decisions, and why not?” he said.

Despite this, the studio Epagogix has worked with for the last five years does not want to be named. It is, says Mr Meaney, a “sensitive” subject.

Secret sauce

If algorithms had a Hollywood-style walk of fame, the first star would have to go to Google.

Its famously secret code has propelled the search giant to its current position as one of the most powerful companies in the world.

No-one would doubt that its system has made searching a whole lot easier, but critics have long asked at what price?

In his book The Filter Bubble, Eli Pariser questions how far Google’s data-crunching algorithms go in harvesting our personal data and shaping the web we see accordingly.

Meanwhile, a recent study by psychologists at Columbia University found that reliance on search engines for answers is actually changing the way humans think.

“Since the advent of search engines, we are reorganising the way we remember things. Our brains rely on the internet for memory in much the same way they rely on the memory of a friend, family member or co-worker,” said report author Betsy Sparrow.

Increasingly, she argues, we remember where information can be found rather than retaining the knowledge itself.

Flash crash

In the financial markets, code is increasingly becoming king as complex number-crunching algorithms work out what to buy and what to sell.

Up to 70% of Wall Street trading is now run by so-called black box or algo-trading.

That means, along with the wise guy city traders, banks and brokers now employ thousands of smart guy physicists and mathematicians.

But even machine precision, supported by the human code wizards, doesn’t guarantee things will run smoothly.

In the so-called Flash Crash of 2.45pm on 6 May 2010, a five-minute dip in the markets caused momentary chaos.

A rogue trader was blamed for the 10% fall in the Dow Jones index, but in reality the computer program the unnamed trader was using was at fault.

The algorithm sold 75,000 stocks with a value of £2.6bn in just 20 minutes, causing other super-fast trading algorithms to follow suit.

Just as a bionic limb can extend a human’s capability for strength and stamina, the electronic market showed its capacity to exaggerate and accelerate minor blips.

No-one has ever managed to pinpoint exactly what happened, and the market recovered minutes later.

The chaos forced regulators to introduce circuit breakers to halt trades if the machines start misbehaving.
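The circuit-breaker idea is simple in outline: compare the latest price against recent history and halt if the move is too violent. Here is a minimal sketch, loosely echoing the 10% and five-minute figures that appear in the article; the thresholds and the code are illustrative, not any exchange's actual rules.

```python
# Minimal sketch of a price-based circuit breaker. The 10% threshold
# and five-minute window echo the article's figures; real exchange
# rules are more elaborate.

WINDOW_S = 5 * 60      # look-back window: five minutes
MAX_DROP = 0.10        # halt if price falls more than 10%

def should_halt(prices):
    """prices: list of (timestamp_s, price) ticks, oldest first."""
    if not prices:
        return False
    now, latest = prices[-1]
    recent = [p for t, p in prices if now - t <= WINDOW_S]
    return latest < max(recent) * (1 - MAX_DROP)

ticks = [(0, 100.0), (60, 99.0), (150, 95.0), (200, 88.0)]
print(should_halt(ticks))  # True: a 12% drop inside the window
```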

The algorithms of Wall Street may be the cyber-equivalent of the 80s yuppie, but unlike their human counterparts, they don’t demand red braces, cigars and champagne. What they want is fast pipes.

Spread Networks has been building one such fibre-optic connection, shaving three microseconds off the 825-mile (1327km) trading journey between Chicago and New York.

Meanwhile, a transatlantic fibre optic link between Nova Scotia in Canada and Somerset in the UK is being built primarily to serve the needs of algorithmic traders and will send shares from London to New York and back in 60 milliseconds.
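The 60-millisecond figure is close to the physical floor. As a rough sanity check, assuming light in optical fibre travels at about two-thirds of its vacuum speed and taking roughly 5,570 km as the London to New York great-circle distance (both assumptions mine, not from the article):

```python
# Rough sanity check on the 60 ms London-New York round trip.
# Assumed figures: ~5,570 km great-circle distance, and light in
# fibre travelling at about two-thirds of c.

C_VACUUM_KM_S = 299_792        # speed of light in vacuum, km/s
FIBRE_FACTOR = 2 / 3           # typical slowdown in glass fibre
DISTANCE_KM = 5_570            # approximate London-New York distance

one_way_s = DISTANCE_KM / (C_VACUUM_KM_S * FIBRE_FACTOR)
round_trip_ms = 2 * one_way_s * 1000
print(f"Best-case round trip: {round_trip_ms:.1f} ms")
```

This comes out at roughly 56 ms for an idealised straight-line path, which is why a real cable routed and switched across the Atlantic targeting 60 ms counts as engineering near the limit of physics.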

“We are running through the United States with dynamite and rock saws so an algorithm can close the deal three microseconds faster, all for a communications system that no humans will ever see,” said Mr Slavin.

As algorithms spread their influence beyond machines to shape the raw landscape around them, it might be time to work out exactly how much they know and whether we still have time to tame them.

<from http://www.bbc.co.uk/news/technology-14306146>


Hewlett Packard to exit computing and buy Autonomy

18 August 2011

Hewlett Packard has confirmed plans to stop making PCs, tablets and phones in order to refocus on software.

It has also emerged that the US company has agreed to buy UK software firm Autonomy for £7.1bn ($11.7bn).

HP added that it was considering selling its personal systems group, which includes the world’s biggest PC-making business, and that it will discontinue its webOS devices.

The webOS operating system is used in its tablet computers and smartphones.

The announcements mark a significant U-turn for the company, which announced in a March strategic review that it would integrate webOS into all of its future hardware.

HP had launched its Pre smartphone as a competitor to the iPhone and devices based on Google’s Android operating system.

However, webOS failed to gain traction with reviewers, operators and retailers.

The decision to ditch the Pre, as well as its TouchPad tablet computers, comes despite paying $1.2bn (£727m) last year to buy up the technology through its acquisition of Palm.

There have been long-running rumours that chief executive Leo Apotheker, who recently joined from German rival SAP, wanted to refocus the company away from its traditional hardware business towards its smaller, but much more profitable, software lines.

The transformation planned by Mr Apotheker mirrors that of IBM, which dropped out of its traditional hardware business over the past decade.

“HP is recognising what the world has recognised, which is hardware in terms of consumers is not a huge growth business anymore,” said Michael Yoshikami, chief executive of YCMNET Advisors.

“It’s not where the money is. It’s in keeping with the new CEO’s perspective that they want to be more in services and more business-oriented.”

On the sale of its PC business, HP said it “will consider a broad range of options that may include, among others, a full or partial separation… from HP through a spin-off or other transaction”.

Market rumours have previously named various private equity firms as being keen to buy parts of HP if a break-up of the company were to happen.

<from http://www.bbc.co.uk/news/business-14584428>


Nujira opens IC design center in Edinburgh

from Peter Clarke

8/1/2011 11:54 AM EDT

LONDON – Nujira Ltd., a provider of “envelope-tracking” technology as a power saver in cellular telephone power amplifiers, has announced it has been awarded £175,000 (about $290,000) funding from Scottish Development International to open an IC design center in Edinburgh, Scotland.

The design center has opened with a small group that could expand to 20 people in time. The multidisciplinary IC development team, focused on handsets, will specialize in switched-mode power supply controller and amplifier designs, Nujira (Cambridge, England) said.

“Our vision of installing hundreds of millions of Coolteq ICs into energy-efficient 3G and 4G devices means that we need to rapidly expand our IC design capabilities,” said Tim Haynes, CEO of Nujira, in a statement.

Nujira announced additional investment of $16 million in May 2011 that would allow it to expand its handset IC design team.

<from http://www.eetimes.com/electronics-news/4218388/Nujira-IC-design-center-Edinburgh>


IBM and ETH Zurich open the Binnig and Rohrer Nanotechnology Center

Celebrating its 100th anniversary, as well as the 30th anniversary of the scanning tunnelling microscope (STM) invented by Nobel laureates Gerd Binnig and Heinrich Rohrer at the IBM Zurich Research Lab in 1981, IBM has opened the Binnig and Rohrer Nanotechnology Center on its research campus near Zurich.

<see http://eetimes.eu/en/ibm-and-eth-zurich-open-the-binnig-and-rohrer-nanotechnology-center.html?cmp_id=7&news_id=222907447 for more>
