10 Reasons Employees Really Care About Their Jobs

Want your employees to really care? First you have to really care about them.

What makes a job more than just a list of duties?

Caring.

Pay is important. Benefits are important. But pay and benefits are also expected; what makes employees go the extra mile is the feeling of belonging to a team, of pursuing a higher purpose, of working with people who care about them as people, not as employees. When that happens, employees want to come to work. Work is more fun. Work is more rewarding.

Work has meaning when we care.

Want your employees to care about your business? First care about them–and show it by:

1. Providing freedom.

Detailed internal systems are important, but unique people create unique experiences. Smart companies allow their employees to be individuals. Obvious example: Zappos, a company that sets overall guidelines and then allows employees to express their individuality within those guidelines.

Assigning authority is important, but true responsibility comes from feeling not just in charge but encouraged and empowered to do what is right–and to do what is right in the way the individual feels is best.

Give me a task to do and I’ll do it. Tell me it’s mine, and tell me to use my best judgment to get it done, and I’ll embrace it. I’ll care, because you trust me.

And I’ll trust you.

2. Setting logical expectations.

Only one thing is worse than being criticized for doing something you thought you were supposed to do: not knowing what to do.

While it might sound contradictory, freedom and latitude are important, but so are basic, understandable expectations. Good companies create and post best practices. Great companies absorb best practices, almost organically, because their employees can easily understand why certain decisions and principles make sense.

When you create a guideline or process, put four times as much effort into explaining why as you do explaining what.

Tell me what to do and I’ll do it. Tell me why and I’ll embrace it–and in the process care a lot more about doing it well.

3. Building a true sense of team.

Go to any swim meet or track meet and you’ll see it happen: Kids swim or run faster in relays than they do in individual events. They know other people are counting on them–and they don’t want to let them down.

Everyone loves to feel that sense of teamwork and esprit de corps that turns a group of individuals into a real team. The key is to show how each person’s effort impacts other people, both at the team level and more broadly throughout the company.

Great companies help employees understand how their efforts impact others, especially in a positive way. We all work hard for our boss, but we work harder for the people beside us–especially when we know they count on us.

4. Fostering a unique sense of purpose.

Just like we all want to feel part of a team, we all like to feel a part of something bigger than ourselves.

Feeling a true purpose starts with knowing what to care about and, more importantly, why to care.

Your company already has a purpose. (If it doesn’t, why are you in business?) But go a step farther and let your employees create a few purposes of their own, for your customers or the community.

You may find that what they care about becomes what you care about–and in the process makes your company even better.

5. Encouraging genuine input.

Every employee has ideas, and one of the differences between employees who care and employees who do not is whether they are allowed to share their ideas, and whether their ideas are taken seriously. Reject my ideas without consideration and I immediately disengage.

Great companies don’t just put out suggestion boxes. They ask leading, open-ended questions. They don’t say, “Should we do this, or this?” They say, “Do you know how we could make this better?” They probe gently. They help employees feel comfortable proposing new ways to get things done.

And when an idea isn’t feasible, they always take the time to explain why–which often leads to the employee coming up with an even better idea.

Employees who provide input clearly care about the company because they want to make it better. Make sure that input is valued and they will care even more, because now it’s not your company–it’s our company.

6. Seeing the person inside the employee.

We all hope to work with people we admire and respect.

And we all hope to be admired and respected by the people we work with. We want to be more than a title, more than a role. We want to be a person, too.

That’s why a kind word, a quick discussion about family, or a brief chat about the triathlon I just finished, the trip I just took, or the hobby I just started is infinitely more important than any meeting or performance evaluation.

I care about you when you care about me–and the best way to show you care is to show, by word and action, that you appreciate me as a person and not just an employee.

7. Treating each employee not just equally but fairly.

Every employee is different. Some need a nudge. Others need regular confidence boosts. Others need an occasional kick in the pants.

Some employees have earned greater freedom. Others have not.

Equal treatment is not always fair treatment. Employees care a lot more when they know a reward or discipline is, under unusual circumstances, based on what is right, not just what is written.

8. Dishing out occasional tough love.

Even the best employees make mistakes. Even the best employees lose motivation.

Even the best employees occasionally need constructive feedback. Sometimes they even need a reality check, to know they are not just letting the company down but are letting themselves down. (A boss once shook his head and said, “You’re better than that.” I was crushed, and vowed to prove he was right.)

In the moment, an otherwise great employee may hate a little tough love, but in time she will realize you cared enough to want her to achieve her goals and dreams.

9. Dishing out frequent public praise.

Just like every employee makes mistakes, every employee also does something well. (Yes, even your worst employee.)

That means every employee deserves some amount of praise. So do it. Find reasons to recognize average performers. Find ways to recognize relatively poor performers. Sometimes all it takes for an employee to turn a performance corner is a little public recognition. Some will want to experience that feeling again; others will want to live up to the faith you show in them.

Public praise shows you care, and that’s reason enough–but it also gives employees another reason to care.

10. Creating opportunities.

When does a job become just a job? When there is no possibility of that job leading to greater things, inside or even outside the company. When there’s no hope, it’s just a job.

Every employee wakes up every day with the hope of a better future. Show them you care by helping create a path to that future.

Good companies assume their employees will benefit when their company grows. Great companies understand that building a better future for the company is directly dependent on building a better future for their employees.

First show you really care about your employees; only then will they start to really care about your company.

That way everyone wins–and isn’t that the kind of company you really want to build?

 

<from http://www.inc.com/jeff-haden/10-reasons-employees-really-care-about-their-jobs.html>


March Madness: The Top 10 Dumbest Hiring Mistakes Smart People Make

“… if you don’t know what you’re looking for, you’ll use some lame excuse to justify how you found it.”

Over the past 40 years, I’ve interviewed over 10,000 people for hundreds of different jobs, from entry-level to CEO. As part of this, I’ve debriefed over 1,000 managers and tracked the subsequent performance of the people they hired and didn’t hire. Based on this, I can safely conclude these are the top 10 classic hiring mistakes:

  1. Using Presentation Skills to Predict Performance. Too many interviewers overvalue the candidate’s appearance, affability, assertiveness and how articulate the person is. These “Four A’s” don’t predict performance, all they predict is the likelihood the wrong person will be hired.
  2. Instantaneous Judgmentitis, aka “Cherry-picking” Syndrome. Once a yes/no hiring decision is made (often within a few minutes), the balance of the interview is used to seek out information confirming that initial flawed decision. For candidates in the “yes” group, the tough questions are avoided, and for those receiving a quick “no”, the toughest ones are asked. The problem can be minimized by waiting at least 30 minutes before making any hint of a yes or no decision.
  3. Using Hard Skills to Predict Performance. It’s what people do with what they have that makes them successful, yet most interviewers focus more on the depth of the having rather than the quality of the doing. It’s better to first determine how these skills are used on the job, and then use the one-question interview to figure out if the person has done what needs to be done.
  4. Thinking Soft Skills are Too Soft to Matter. Collaborating with other people in other functions, meeting challenging deadlines, changing priorities, making business tradeoffs, obtaining resources, and the like, are too important to be called soft. Yet most interviewers spend too little time on how these non-technical skills drive performance.
  5. Missing the Forest for the Trees. If you’ve ever hired someone who’s partially competent, you’ve experienced this problem. Technical people focus too much on technical brilliance and not enough on how these skills are used on the job. Intuitive people rely on a narrow range of abilities, like assertiveness and intellectual horsepower, and assume global competency. The problem can be minimized by preparing a performance-based job description defining the top 4-5 things a person needs to do to be successful. Then put them in priority order and get everyone on the interviewing team to agree. Combine this with the one-question Performance-based Interview and you’re unlikely to make this mistake again.
  6. Gladiator Voting. Putting a bunch of interviewers in the same room and deciding to hire or not hire someone by adding up the yes/no votes is a recipe for hiring the wrong person. Sharing evidence around the factors that drive success is the key to an accurate assessment. Here’s a scorecard we recommend using to collect the objective evidence needed to make an accurate assessment. When there is a wide variance of opinion around each factor, you can safely assume your company’s interview process is based on something other than the candidate’s ability to do the work that needs to be done.
  7. The Safety of No. A no vote is easier to make, since those who invoke it can never be proved wrong. A “no” also rewards the weakest and most conservative interviewers, since neither has enough information to vote yes. Worse, one no vote can override 2-3 yes votes, especially if the person voting no has more authority. This is why the talent scorecard approach mentioned above is more effective.
  8. Misreading Motivation. Motivation to do the job is essential to job success. However, motivation to do the job is not the same as motivation to get the job. Being prepared, being on time, doing company research, or responding “correctly” to the question, “Why do you want this job?” are terrible predictors of real motivation. Unfortunately, too many interviewers are seduced by these superficial displays of interest. The one-question Performance-based Interview will reveal what really motivates candidates to excel.
  9. Ignoring Situational Fit. Even if you overcome all of these easily preventable hiring mistakes and measure true ability, one issue is often overlooked. If the candidate isn’t highly motivated to do the actual work that needs to get done, doesn’t mesh with the hiring manager’s style, or can’t thrive in the company culture (i.e., pace, decision-making process, approach to collaboration, level of sophistication, level of support and resources available), success is problematic.
  10. Asserting the Wrong Consequent. An example best illustrates this problem. Most interviewers falsely assume that the best sales reps make good first impressions. With this viewpoint, many compound the error by concluding that everyone who makes a good first impression will be a good sales rep. (This is an example of the “asserting the consequent” logic problem.) What I’ve discovered is that the only common characteristic among the best salespeople is a track record of great sales performance. When I find a great sales rep who makes a less-than-stellar first impression, I’ve discovered that the person works harder than everyone else. You can apply this same principle to any job where there’s a belief that first impressions matter. What matters is a track record of past performance doing what you need to get done.

Don’t take shortcuts when it comes to hiring. This starts by defining what you need done. If you skip this step you’re likely to fall prey to one or more of the common hiring traps described here. As someone once told me, “If you don’t know what you’re looking for, you’ll use some lame excuse to justify how you found it.”

_____________________

Lou Adler (@LouA) is the CEO of The Adler Group, a consulting and search firm helping companies implement Performance-based Hiring. He’s also a regular columnist for Inc. Magazine and BusinessInsider. His latest book, The Essential Guide for Hiring & Getting Hired (Workbench, 2013), provides hands-on advice for job-seekers, hiring managers and recruiters on how to find the best job and hire the best people.

 

<from https://www.linkedin.com/today/post/article/20140321053539-15454-march-madness-the-top-10-dumbest-hiring-mistakes-smart-people-make>


We’re Being Driven to Distraction by Clamorous Computing

By Paul McFedries

The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it. … Today’s multimedia machine makes the computer screen into a demanding focus of attention rather than allowing it to fade into the background.

Mark Weiser, Scientific American, September 1991

In 1988, Xerox PARC computer scientist (and later CTO) Mark Weiser put forward the idea of—and coined the term—ubiquitous computing. Sometimes shortened to ubicomp, it refers to the seamless integration of computing resources into most of the objects that people use to perform the activities of daily life. Today we’re more likely to call it pervasive computing or everyware. We’re not quite there, despite newfangled appliances such as smart TVs and smart refrigerators, but modern computing does have a pervasive feel to it. That feel comes mostly from the gadgets like smartphones and tablets (and soon, wearables like Google Glass) that we routinely carry around with us. Thanks to cellular connections and Wi-Fi networks, we have near-constant access to computing power and online data, giving us what might be called near-ubiquitous computing. It’s not quite the ambient intelligence envisioned by ubicomp fans, but it’s a step or three in that direction.

There’s a problem, however. One of the chief characteristics of true ubicomp is that it is a calm technology, meaning that it remains in the background until needed and thus enables us to interact with it in a calm, engaged manner. Today’s mobile computing platforms are more like jittery technology, constantly beeping at us and alerting us to new messages, posts, updates, and news. (Hence, perhaps, the curious prevalence of phantom vibration, the perception of a cellphone’s vibration in the absence of an incoming call or notification.) Even watching TV is no longer simple, as more and more people use their mobile tech for second screening (monitoring social media commentary about the show they’re watching) and chatterboxing (chatting online with people watching the same show).

If it’s by now axiomatic that even as we change technology, it changes us, then we have to wonder how we’re being altered by this constant connectivity. On the positive side, having so much information fingertip ready is a boon for productivity and the quick settling of bar bets. On the downside, all this digital hectivity leads to FOMO, the fear of missing out on something interesting or fun, which can lead to obsessive checking of social networks. We like to think we’re capable of being polyattentive (watching or listening to more than one thing at a time), but it’s more like what Microsoft researcher Linda Stone calls continuous partial attention, where we’re ostensibly focused on some task but a chunk of our attention is waiting for something more important to pop up. It’s no wonder many people suffer from nomophobia, the fear of being without a mobile phone or without a cellular signal. Our phones and tablets have become weapons of mass distraction.

The result is the shortened attention span that writer Nicholas Carr identified in his famous 2008 essay, “Is Google Making Us Stupid?” We’ve become self-interrupters who now routinely suspend our own work to check social media or watch the latest viral video. Conveniently, it looks like we’re being productive members of society, but in reality our focus on the trivial and the fleeting means we’re just being fauxductive. We’re social notworking. True, our brains are engaged, but not always in a good way. We suffer from busy brain, a mental state that includes racing thoughts, anxiety, lack of focus, and sleeplessness. We indulge in binge thinking, where we overthink problems or think obsessively but fruitlessly over a short period.

Ubiquitous computing remains a technophile’s dream—and a utopian one at that, thanks to its vision of technology waiting in the background, not speaking unless spoken to. In its stead we have ubiquitous connectivity—always on, always interrupting, always in your face. And there’s nothing calm about that.

This article originally appeared in print as “Clamorous Computing.”

 

<from http://spectrum.ieee.org/computing/embedded-systems/were-being-driven-to-distraction-by-clamorous-computing>


From the SLTC Chair – on the impact factor

John H.L. Hansen

SLTC Newsletter, May 2012

Welcome to the next installment of the SLTC Newsletter for 2012. In this SLTC Chair update, I would like to spend a little time covering several related aspects of citations and impact which have come up, based on (i) a proposal by ICASSP-2013 to move to an optional 5-page format, and (ii) trends from IEEE Trans. Audio, Speech and Language Processing and ICASSP papers.

In our field of signal processing, and in particular speech and language processing, there is an expectation that high-quality research should appear and be recognized in both our premier conferences (for IEEE within the Signal Processing Society, that is ICASSP – the Inter. Conf. on Acoustics, Speech, and Signal Processing) and journals (again, for speech and language researchers, that would be the IEEE Trans. on Audio, Speech and Language Processing). The notion of “impact factor” has grown into a symbol of the quality of journals and conferences. As such, there is increasing interest within the IEEE Signal Processing Society in raising the impact factor of our journals and conferences. There is also an ongoing discussion about the acceptance rate of papers submitted to IEEE ICASSP, with the notion that a higher rejection rate means a higher-quality conference (some of my colleagues in the Computer Science field have conferences with acceptance rates of 15-20%). This of course assumes that the papers submitted to all conferences have an equivalent quality distribution, which is not the case (some even argue that their conferences are more prestigious because their acceptance rates are lower than the acceptance rates of their journals – an argument that is clearly flawed, since authors are generally able to self-select and do not submit manuscripts to journals at the same rate and quality as they might to conferences). However, in this letter I would prefer to focus on journal impact factor rather than conference acceptance rates, and leave that topic for another time.

So, here, I offer some comments on impact factor and citations. Let me be up front and say that I am by no means an expert in this domain; one only needs to go to the various online search engines to find many publications on the effectiveness – or lack thereof – of impact factors as a measure of quality, whether of the journal, the papers appearing in those journals, or the authors of those papers.

So, where did this “impact factor” come from? In an article, Brumback [2] notes that Eugene Garfield, an archivist/library scientist from the Univ. of Penn., developed a metric that could be used to select journals for inclusion in a publication then under consideration, called the “Genetics Citation Index” [1]. This Citation Index was the initial model for what is now known as the Science Citation Index, which, it should be noted, was commercialized by Garfield’s company, the Institute for Scientific Information (ISI) (as noted by Brumback [2]). As can be found in a number of locations on the web, this “impact factor” metric for journals is calculated “based on 2 elements: the numerator, which is the number of citations in the current year to any items published in a journal in the previous 2 years, and the denominator, which is the number of substantive articles (source items) published in the same 2 years” [2,3,4,5].
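Expressed as a calculation, the two-year impact factor is just that ratio. A minimal Python sketch (the journal figures below are hypothetical, chosen only for illustration):

```python
def impact_factor(citations_this_year, source_items_prev_two_years):
    """Two-year journal impact factor, per the definition quoted above.

    citations_this_year: citations in the current year to any items the
        journal published in the previous two years (the numerator).
    source_items_prev_two_years: substantive articles (source items) the
        journal published in those same two years (the denominator).
    """
    return citations_this_year / source_items_prev_two_years

# Hypothetical journal: 500 citations in 2012 to its 2010-2011 output,
# which comprised 300 source items.
print(round(impact_factor(500, 300), 2))  # 1.67
```

Note that the denominator counts only “substantive” source items, so editorials and letters that attract citations can inflate the ratio without enlarging the denominator – one of the editorial levers discussed below.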

Unfortunately, as Seglen [3] points out, the journal impact factor has migrated into a singular rating metric used to determine not just the value of particular journals, but also the quality of scientists, whether someone should be hired at a university, research laboratory, or company, the quality of a university, or even the quality of the scientific research produced by an individual or research group. The impact factor is also being used by promotion committees in many universities, as well as by “committees and governments in Europe and to a lesser extent in North America,” in making decisions on whether to award grants/contracts, as well as on promotion and tenure for individual faculty/scientists.

It should be clear to signal processing researchers that evaluating scientific quality is a difficult problem with no clear standard solution (in some sense, the medical field is ahead of engineering, since the debate on impact factor and criteria has been an open area of active discussion there for many years). In an ideal world, published scientific results would be assessed for quality by true experts in the field and given independent quality and quantity scores agreed upon according to an established set of rules. Instead, IEEE uses our peer review process, based on review committees, to assess the quality of a paper (i.e., the review process for ICASSP as well as for IEEE Transactions on ASLP papers). Of course, when outside scientists seek to assess the impact of our publications, those assessments are generally performed by general committees who are not experts in speech and language processing, and who therefore resort to secondary criteria such as raw publication counts, perceived journal prestige, and the reputation of particular authors and institutions. Again, as scientists and engineers, we generally seek a solution that has a scientific basis, is quantitative and repeatable, and ultimately removes as much subjective influence as possible. Unfortunately, the importance of a research field is sometimes associated with the impact factor generated by the journals in that sub-discipline, which can wrongly skew the general public’s impression of the importance of someone’s research.

As noted in Wikipedia [5], the “impact factor is highly discipline-dependent”. The percentage of total citations occurring in the first two years after publication varies greatly among research areas: for some, such as math and the physical sciences, it is as low as 1-4%, while for the biological sciences it can be as high as 5-8% [5]. A study by Chew, Villanueva, and Van Der Weyden [4] in 2007 considered the impact factors of seven medical journals over a 12-year period. They found that the journals’ impact factors increased due to either (i) the numerators increasing, or (ii) the denominators decreasing (not surprising!), to varying degrees. They interviewed the journal editors to explore why such trends were occurring, and a number of reasons were noted, including deliberate editorial practices. So clearly, the impact factor is vulnerable to editorial manipulation at some level in many journals, and there is clear dissatisfaction with it as the sole measure of journal quality. It clearly does not make sense to claim that all articles published in a journal are of similar quality, and even Garfield [1], who originated the impact factor, states that it is incorrect to judge an article by the impact factor of its journal.

So where does that leave either IEEE ICASSP or IEEE TASLP (for those of us in speech and language processing)? If you visit the IEEE Transactions on Audio, Speech and Language Processing website [6], you will see the impact factor is 1.668. A ranking of the top journals in electrical and electronic engineering [7] shows that the top 10 journals (in 2004) had impact factors ranging from 2.86 to 4.35 for a single year. By comparison, the top 10 journals in Surgery, as ranked by Sci-Bytes [8], range from 4.06 to 7.90, and the top 10 journals in materials science have impact factors from 4.88 to 20.85 [9].

[7] Journals Ranked by Impact Factor: Electrical & Electronic Engineering http://www.in-cites.com/research/2006/january_30_2006-1.html

[8] Sci-Bytes> Journals Ranked by Impact: Surgery (Week of July 4, 2010); http://sciencewatch.com/dr/sci/10/jul4-10_2/

[9] Impact factor of journals in Material Science; http://sciencewatch.com/dr/sci/09/may24-09_1/

IEEE ICASSP-2012: So the reason for discussing impact factor here stems from discussions at ICASSP-2012 and a proposal put forth by the IEEE ICASSP-2013 organizing committee. That proposal asked individual Technical Committees to consider allowing an optional 5th page to be added to the regular 4-page ICASSP paper, a format which has existed for more than 30 years. The proposal was suggested in order to increase the number of citations included in each ICASSP paper. You might be surprised to know that the number of citations per ICASSP paper has dropped from 5.8 (in 2006, with a total of 11020 citations) to 2.8 (in 2009, with a total of 5239 citations). Papers published at ACL are 8 pages long and averaged 16.6 citations per paper in 2009. Interestingly, the citation lists are also short in time span, with most citations going to recent work and fewer ICASSP authors citing seminal work from more than 2-3 years in the past (I realize this is a generalization; I simply suggest it is occurring, though not across all papers). The proposal to increase the page count from 4 to 5 was therefore considered by the SLTC and initially turned down, primarily because the fundamental flaw here is not a lack of space but an increasing tendency for authors and researchers to commit less real estate to their references.
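As a back-of-the-envelope check on those figures, dividing the quoted citation totals by the per-paper averages recovers approximate paper counts, which suggests the number of ICASSP papers stayed roughly flat while references per paper halved:

```python
# Citation totals and per-paper averages quoted above for ICASSP.
total_citations = {2006: 11020, 2009: 5239}
per_paper = {2006: 5.8, 2009: 2.8}

for year in (2006, 2009):
    # Approximate paper count, back-calculated from the quoted figures.
    papers = total_citations[year] / per_paper[year]
    print(year, round(papers))  # roughly 1900 papers in 2006, 1871 in 2009
```

The paper counts are back-calculated estimates, not figures from the newsletter, but they support the point that the drop reflects shorter reference lists rather than fewer submissions.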

At the SLTC meeting at ICASSP-2012, we renewed this discussion and came to the consensus that the SLTC would support an optional 5-page format for ICASSP-2013, with the conditions that (i) only references would be allowed on the 5th page (owing to the fact that Speech and Language Processing routinely receives about ¼ of the ICASSP submissions – generally around 700 papers to review – and we did not want to tax our expert reviewers further), and (ii) the IEEE ICASSP conference paper template would have a section before the final “Conclusions/Summary” section entitled “Relation to Previous Work”. In this section, authors are expected to specifically point to prior work and to explain how their ICASSP contribution either builds on or differs from prior studies. The SLTC believes this will better address the problem of contributions not being related to previous studies.

Finally, where possible, it is important to cite IEEE Transactions papers in place of, or in addition to, previous ICASSP conference papers. If folks have any comments on this proposal, please let the SLTC know your opinions (or communicate them to your own Technical Committees within the IEEE Signal Processing Society if you focus on topics outside of speech and language processing).

In closing, we hope that everyone who attended IEEE ICASSP in Kyoto enjoyed the conference and came away with new ideas, new knowledge, new friends/colleagues, and new connections with other researchers/laboratories. I will say that I thoroughly enjoyed the conference and commend the outstanding organizational accomplishments of the ICASSP 2012 Organizing Committee (really a flawless, superb meeting!) as well as CMS for their excellent handling of logistics. I have a saying: if everything goes well, it just doesn’t happen that way – someone really sweated the details and made sure everything would go smoothly. The ICASSP-2012 Organizing Committee should be commended for an outstanding job!

It is now May and only six more months until the next due date (November 19, 2012) for ICASSP-2013 in Vancouver, Canada (which takes place May 26-31, 2013)!

 

from <http://www.signalprocessingsociety.org/technical-committees/list/sl-tc/spl-nl/2012-05/from-the-chair/>


Ready for recording Settlers of Catan with the DMMA.2 and DMMA.3

Picture of the IMR here at the CSTR ready for the recording.



Data Breaches: When the Lawyers Get Involved

We all know that the data breach situations businesses encounter can get extremely complex. State laws take hold around breach disclosure, expensive forensics specialists are needed to re-engineer how attacks and/or mishandling of sensitive information occurred… and now the lawyers are jumping into the fray.

Data breaches have become big business for many law firms. Some might see it as ambulance chasing. And while it might cost breached companies a pretty penny to hire a large law firm to represent them, those costs could pale in comparison to what they might have to pay in fines and customer lawsuits if they don’t have solid representation.

An interesting article in Monday’s Wall Street Journal described the newfound opportunities for the legal industry, as firms position their cybersecurity know-how to attract new clients.

But it’s not just a cash grab by the lawyers. One interesting example described how companies are starting to loop their attorneys in at the first hint of a data breach. This way attorney-client privilege kicks in immediately, and companies can pre-empt a potential influx of lawsuits by taking a few simple steps:

  1. Once you have hired a law firm with expertise in data breaches, the law firm hires the forensics investigators. This way, the investigators are beholden to the law firm and cannot, by law, report anything they find as they navigate the company’s systems along the path of the breach.
  2. The law firm helps navigate the myriad state data breach disclosure laws, of which there are now 27. This ensures that the company discloses only what it legally must to the public, regulatory bodies, and customers.
  3. It prevents the breached company from being subjected to multiple lawsuits in the event it does not hire counsel to oversee the investigation. For example, if a governing body appoints the forensics company to investigate post-breach and the breached organization isn’t represented, there is nothing restricting that intelligence from hitting the open market, being reported on, and being analyzed as an example of what not to do. When that happens, and customers, partners and suppliers know exactly how careless the company may have been, it risks a major hit to its image and its wallet, not to mention the consequences if auditors find it non-compliant with baseline protections for sensitive data.

In a litigious society, it is imperative that companies protect themselves. That said, it’s also important to remember to employ at least the baseline level of security protections — whether that is in accordance with PCI DSS standards or other requirements like BITS in the financial services industry.

Adhering to these and best practices models like the OWASP Top 10, along with ensuring you have legal representation, can drastically reduce the risk level in the event of a data breach.

by Tom Bain

<from http://www.computer.org/portal/web/computingnow/security/content?g=53319&type=article&urlTitle=data-breaches%3A-when-the-lawyers-get-involved&lf1=301437723f676016006449b8053249>


Analog and digital MEMS microphone design considerations


April 01, 2013 | Jerad Lewis

Jerad Lewis examines the design considerations that need to be addressed when integrating analog and digital MEMS microphones into a system design.


Microphones are transducers that convert acoustic pressure waves to electrical signals. Sensors have become more integrated with other components in the audio signal chain, and MEMS technology is enabling microphones to be smaller and available with either analog or digital outputs.

 

Analog and digital microphone output signals obviously have different factors to consider in a design. I will examine the differences and the design considerations involved in integrating analog and digital MEMS microphones into a system.

Inside a MEMS Microphone
The output of a MEMS microphone does not come directly from the MEMS transducer element. The transducer is essentially a variable capacitor with an extremely high output impedance in the gigaohm range.

Inside the microphone package, the transducer’s signal is sent to a preamplifier, whose first function is an impedance converter to bring the output impedance down to something more usable when the microphone is connected in an audio signal chain. The microphone’s output circuitry is also implemented in this preamp.

For an analog MEMS microphone, this circuit, whose block diagram is shown in Figure 1, is basically an amplifier with a specific output impedance. In a digital MEMS microphone, that amplifier is integrated with an analog-to-digital converter (ADC) to provide a digital output in either a pulse density modulated (PDM) or I2S format.

Figure 1: Typical Analog MEMS Microphone Block Diagram

Figure 2 shows a block diagram of a PDM-output MEMS microphone, and Figure 3 shows a typical I2S-output digital microphone. The I2S microphone contains all of the digital circuitry that a PDM microphone has, as well as a decimation filter and serial port.

Figure 2: Typical PDM MEMS Microphone Block Diagram

Figure 3: Typical I2S MEMS Microphone Block Diagram

A MEMS microphone package is unique among semiconductor devices, in that there is a hole in the package for the acoustic energy to reach the transducer element. Inside this package, the MEMS microphone transducer and the analog or digital ASIC are bonded together and mounted on a common laminate. A lid is then bonded over the laminate to enclose the transducer and ASIC. This laminate is basically a small PCB that’s used to route the signals from the ICs to the pins on the outside of the microphone package.

Figures 4 and 5 show the inside of analog and digital MEMS microphones, respectively. In these pictures you can see the transducer on the left and the ASIC (under the epoxy) on the right, both mounted on the laminate. The digital microphone has additional bond wires to connect the electrical signals from the ASIC to the laminate.

Figure 4: Transducer and ASIC of an analog MEMS microphone

Figure 5: Transducer and ASIC of a digital MEMS microphone

Analog Microphones

An analog MEMS microphone’s output impedance is typically a few hundred ohms. This is higher than the low output impedance that an op amp typically has, so you need to be aware of the impedance of the stage of the signal chain immediately following the microphone.

A low-impedance stage following the microphone will attenuate the signal level. For example, some codecs have a programmable gain amplifier (PGA) before the ADC. At high gain settings, the PGA’s input impedance may be only a couple of kilohms. A PGA with a 2 kΩ input impedance following a MEMS microphone with a 200 Ω output impedance will attenuate the signal level by almost 10%.
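A quick sanity check of that attenuation figure, modeling the mic-to-PGA interface as a simple resistive voltage divider (the helper name below is my own, for illustration):

```python
# Fraction of signal retained when a source with output impedance Zout
# drives a load with input impedance Zin: Vout/Vin = Zin / (Zin + Zout)
def divider_gain(z_out_ohms, z_in_ohms):
    return z_in_ohms / (z_in_ohms + z_out_ohms)

gain = divider_gain(200.0, 2000.0)      # 200 ohm mic into a 2 kOhm PGA input
attenuation_pct = (1.0 - gain) * 100.0  # ~9.1%, i.e. "almost 10%"
print(f"retained: {gain:.3f}, attenuation: {attenuation_pct:.1f}%")
```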

The output of an analog MEMS mic is usually biased at a dc voltage somewhere between ground and the supply voltage. This bias voltage is chosen so that the peaks of the highest amplitude output signals won’t be clipped by either the supply or ground voltage limits. The presence of this dc bias also means that the microphone will usually be ac-coupled to the following amplifier or converter ICs. The series capacitor needs to be selected so that the high-pass filter circuit that’s formed with the codec or amplifier’s input impedance doesn’t roll off the signal’s low frequencies above the microphone’s natural low-frequency roll-off.

For a microphone with a 100-Hz low-frequency -3-dB point and a codec or amplifier with a 10-kΩ input impedance (both common values), even a relatively small 1.0-µF capacitor puts the high-pass filter corner at 16 Hz, well out of the range where it will affect the microphone’s response. Figure 6 shows an example of this sort of circuit, with an analog MEMS microphone connected to an op amp in a non-inverting configuration.
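The quoted corner frequency follows from the standard first-order RC high-pass formula, fc = 1 / (2πRC), which can be checked directly:

```python
import math

# First-order high-pass corner formed by the AC-coupling capacitor and the
# following stage's input impedance: fc = 1 / (2 * pi * R * C)
def hp_corner_hz(r_ohms, c_farads):
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

fc = hp_corner_hz(10e3, 1.0e-6)  # 10 kOhm input, 1.0 uF coupling capacitor
print(f"high-pass corner: {fc:.1f} Hz")  # ~16 Hz, well below the mic's 100 Hz roll-off
```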

 

Figure 6: Analog microphone connection to non-inverting op amp circuit

 

Digital Microphones
Digital microphones move the analog-to-digital conversion function from the codec into the microphone, enabling an all-digital audio capture path from the microphone to the processor. Digital MEMS microphones are often used in applications where analog audio signals may be susceptible to interference.

For example, in a tablet computer, the microphone may not be placed near the ADC, so the signals between these two points may run across or near Wi-Fi, Bluetooth or cellular antennas. Digital connections are less prone to picking up this RF interference and producing noise or distortion in the audio signals. This reduced pickup of unwanted system noise provides greater flexibility in microphone placement in the design.

Digital microphones are also useful in systems that would otherwise only need an analog audio interface to connect to an analog microphone. In a system that only needs audio capture and not playback, like a surveillance camera, a digital-output microphone eliminates the need for a separate codec or audio converter and the microphone can be connected directly to a digital processor.

Of course, good digital design practices must still be applied to a digital microphone’s clock and data signals. Small-value (20-100 Ω) source termination resistors are often useful to ensure good digital signal integrity across traces that are often at least a few inches long (Figure 7). For shorter trace lengths, or when running the digital microphone clocks at a lower rate, it is possible that the microphone’s pins can be directly connected to the codec or DSP, without the need for any passive components.

 

Figure 7: PDM microphone connection to codec with source termination

 

PDM is the most common digital microphone interface; this format allows two microphones to share a common clock and data line. The microphones are each configured to generate their output on a different edge of the clock signal. This keeps the outputs of the two microphones in sync with each other, so the designer can be sure that the data from each of the two channels is captured simultaneously.

At worst, the data captured from the two microphones will be separated in time by a half period of the clock signal. The frequency of this clock is typically about 3 MHz, which would lead to an interchannel time difference of just 0.17 µs, well below the threshold that a listener will notice. This same synchronization can be extended to systems with more than two PDM microphones by simply ensuring that the microphones are all connected to the same clock source and the data signals are all being filtered and processed together. With analog microphones, this synchronization is left up to the ADC.
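That worst-case figure is simply half the PDM clock period; assuming a 3 MHz clock:

```python
# Worst-case skew between the two PDM channels is half a clock period,
# since the two mics drive data on opposite edges of the shared clock.
pdm_clock_hz = 3.0e6
worst_case_skew_s = 0.5 / pdm_clock_hz
print(f"worst-case interchannel skew: {worst_case_skew_s * 1e6:.2f} us")
```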

I2S interface

I2S has been a common digital interface for audio converters and processors for years, but it’s just recently being integrated into the devices at the edges of the signal chain, such as a microphone. An I2S microphone has the same system design benefits as a PDM microphone, but instead of outputting a high-sample rate PDM output, its digital data is output at a decimated baseband audio sample rate. With a PDM microphone, this decimation happens in the codec or DSP, but with an I2S microphone this processing is done directly in the microphone, which in some systems can eliminate the need for an ADC or codec entirely.

An I2S microphone can connect directly to a DSP or microcontroller for processing with this standard interface (Figure 8). As with PDM microphones, two I2S mics can be connected to a common data line, although the I2S format uses two clock signals, a word clock and a bit clock, instead of the single clock used for PDM.

 

Figure 8: Stereo I2S microphone connection to a DSP

 

When Size Matters
Generally, analog MEMS microphones are available in smaller packages than digital microphones. This is because an analog microphone package needs fewer pins (typically three, vs. five or more for a digital microphone) and the analog preamp has less circuitry than a digital preamp. This makes the analog preamp smaller than a digital preamp manufactured in the same fab geometry. Consequently, in the most space-constrained designs, such as in many small mobile devices, analog microphones are preferred in part because of their small size.

An analog microphone can be in a package with dimensions of 2.5 × 3.35 × 0.88 mm or smaller, while PDM microphones often come in a 3 × 4 × 1 mm package, an increase of about 63% in package volume. Figure 9 shows a comparison of three bottom-port microphone packages. The smallest is the ADMP504, an analog microphone in the 2.5 × 3.35 × 0.88 mm package; the middle-sized microphone is the ADMP521, a PDM microphone in the 3 × 4 × 1 mm package; and the microphone in the largest package is the ADMP441, an I2S microphone in a 3.76 × 4.72 × 1.0 mm package.
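The volume comparison for the two package sizes cited above works out as follows (the helper function is my own, for illustration):

```python
# Rectangular package volume in cubic millimeters
def volume_mm3(length, width, height):
    return length * width * height

analog_mm3 = volume_mm3(2.5, 3.35, 0.88)  # ADMP504-style analog package
pdm_mm3 = volume_mm3(3.0, 4.0, 1.0)       # ADMP521-style PDM package
increase_pct = (pdm_mm3 / analog_mm3 - 1.0) * 100.0
print(f"analog: {analog_mm3:.2f} mm^3, PDM: {pdm_mm3:.1f} mm^3, "
      f"increase: {increase_pct:.0f}%")
```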

 

Figure 9: Comparison of microphone package sizes

 

This last microphone is in a larger package to support its nine pins. Despite its larger size, a microphone like this is comparable in functionality to an analog microphone and an ADC together, so the savings in PCB area, if a converter is otherwise not needed, outweigh the slightly larger microphone footprint.

Analog and digital MEMS microphones both have advantages in different applications. Considering the system’s size and component placement constraints, electrical connections, and potential sources of noise and interference will lead to a well-informed decision on which type of microphone is best for your design.

About the author
Jerad Lewis is a MEMS microphone applications engineer at Analog Devices. He joined the company in 2001 after getting his BSEE from Penn State University. Since then, he’s supported different audio ICs, such as SigmaDSPs, converters, and MEMS microphones. He is currently also pursuing an M.Eng. in Acoustics at Penn State University.

<from http://www.analog-eetimes.com/en/analog-and-digital-mems-microphone-design-considerations.html?cmp_id=71&news_id=222904847>


Overview of MEMS microphone technologies for consumer applications

by St.J. Dixon-Warren
Engineering and Process Analysis Manager, Chipworks

The extraordinary success of the iPhone 4 lies in the superb integration of multiple sensor technologies in conjunction with a very slick software interface.  In a recent series of MEMS Investor Journal articles, we reviewed the 9DoF motion sensing technology used in the iPhone 4.  Here we will discuss the MEMS microphones that have been incorporated into the iPhone 4, and we will then provide a review of some other MEMS microphones seen by Chipworks in recent years.  MEMS microphone suppliers apparently saw a 50% increase in shipments in 2010, and they expect to see a fourfold increase by 2014.

The iPhone 4 incorporates two microphones in the body of the device, and a third was found in the headset leads.  Knowles Technology earned the design win for the two primary audio sensing microphones, one in the handset and one in the headset lead.  An Infineon fabricated microphone, likely used for background pickup for noise cancellation, was found at the opposite end of the handset body.

An iPhone 4 teardown, showing photographs and x-rays of the microphone packages, and the MEMS and ASIC die, is available from Chipworks.  In this article, we will focus on the MEMS microphone die.  We will discuss the fabrication and operation of these clever little pieces of technology.

MEMS microphones are capacitive sensing devices.  In essence, they operate like a high-frequency pressure sensor.  They feature a diaphragm comprised of two capacitor plates that, under the influence of a sound wave, vibrate with respect to each other.  This results in a variation of the capacitance, which is then amplified by an associated ASIC to produce either an analog or digital output signal.  In the case of the Knowles microphones in the iPhone 4, Chipworks believes they are analog devices, although we have not yet done a circuit analysis to prove this.  We have not been able to identify the specific Knowles part numbers used.

The two Knowles primary audio sensing microphones contain a 1.1 mm2 MEMS die, with S4.10 die markings, shown in Figure 1 below.  Chipworks has seen the S4.10 die in many different downstream products.  We believe that this MEMS microphone die is used in the majority of Knowles’ current MEMS microphone product lineup, with the distinction between the various products being determined by the amplifier ASIC and the packaging.  The S4.10 die is very simple.  It features a MEMS diaphragm with two bond pad connections, one for the top plate and the other for the bottom plate.  The S4.10 likely uses the back side of the die to form a ground connection.

Figure 1: Knowles S4.10 MEMS microphone die from the iPhone 4.

The S4.10 die is actually quite similar to a Knowles microphone die with S2.14 die markings, shown in Figure 2.  It was extracted from the SP0103BE3 product by Chipworks in 2006.  The major difference is that the S2.14 is 1.6 mm2, corresponding to about twice the die area of the S4.10.  The 50% shrink in the S4.10 die area will, of course, approximately double the yield of devices per wafer, thus resulting in a dramatic reduction in cost.  The MEMS microphone diaphragm has essentially the same ~0.5 mm diameter on both parts; however, the S2.14 has four bond pads: one for the bottom plate, two for the top plate, and a separate ground connection.  Thus, not only was the S2.14 larger and, hence, more expensive to make, but its packaging and integration costs would also have been higher, since twice the wire bonding was required.

Figure 2: Knowles S2.14 MEMS microphone die.

A detailed view of the edge of the S2.14 microphone diaphragm is presented in Figure 3.  The top plate is covered with an array of small holes, which are required to allow air to escape from the cavity between the two plates during operation. They were also needed in the release etch step in the fabrication process, discussed below.

Figure 3: Knowles S2.14 MEMS die detail.

Figure 4 shows a cross section through the S2.14 MEMS microphone die. The top and bottom capacitor plates are suspended above a sealed cavity, which is formed from the back side of the die using a wet etch which was selective for the {111} plane of the silicon.  The position of the plates has been somewhat distorted by the epoxy resin used by Chipworks to stabilize the sample for cross-sectioning.

Figure 4: Knowles S2.14 MEMS die cross section.

A detailed view of the edge of the microphone diaphragm is shown in Figure 5.  The bottom plate is formed using a single layer of polysilicon (poly 1), while the top flexible diaphragm plate is comprised of a bilayer of silicon nitride and polysilicon (poly 2), perforated with the holes.

According to iSuppli, the Knowles MEMS microphone die are fabricated by Sony Semiconductor Kyushu Corp.  The fabrication of the two membranes, separated by an air gap, would have depended on the spacer layer, likely silicon dioxide, which would have been removed during the release step through the holes in the top plate, likely with anhydrous HF.  It is most probable that the back side cavity etch was performed before this final step, probably by using a KOH wet etch.

Figure 5: Knowles S2.14 MEMS die cross section detail.

The Infineon MEMS microphone die found in the iPhone 4 is similar in many respects to the Knowles device.  Figure 6 shows a photograph of the 1.35 mm x 1.25 mm Infineon E2002 MEMS microphone die.  The microphone diaphragm is ~1.0 mm in diameter.  The die features three bond pads, one for the top polysilicon plate, one for the bottom polysilicon plate, and a third which connects to a polysilicon guard ring.  The Infineon E2002 die found in the iPhone is identical to that seen by Chipworks during the analysis of the Infineon SMM310E6433XT integrated silicon microphone.

Figure 6: Infineon E2002 MEMS microphone die from the iPhone 4.

The SMM310E6433XT is no longer advertised on the Infineon web site; however, according to iSuppli, Infineon now supplies its MEMS die to three Asian microphone suppliers, AAC Acoustic Technologies Holdings Inc., BSE Co. Ltd., and Hosiden Corp.  These three suppliers, along with Knowles and Analog Devices, constitute the top five suppliers in the MEMS microphone market, with Knowles apparently holding nearly 80% of the market.

As an aside, it is worth noting that Analog Devices won the design for the microphone in the fifth generation Apple iPod Nano (the new sixth generation Nano does not contain a microphone).  It is interesting that Analog was not able to win a socket in the iPhone 4.  Knowles and Analog Devices have recently concluded patent litigation, with the judge ruling in Analog Devices’ favor.

Figure 7 presents a photograph of the 1.0 mm x 1.0 mm Analog Devices MEMS microphone die found in the ADMP403, extracted from the iPod Nano (fifth generation).  Apparently, a major part of Analog’s strategy to avoid infringement on Knowles’ patents was to base the fabrication of this device on its well established iMEMS process.  The iMEMS process has been in use for many years for the production of inertial sensor products.

Figure 7: Analog Devices 4.8H MEMS microphone die from the iPod Nano.

MEMS microphones have evolved into a mature commodity product.  The market is highly competitive.  There are a number of other manufacturers, such as Akustica and MEMSTech, who continue to produce MEMS microphone products.  Akustica microphones are of particular interest to the MEMS community, since Akustica was the first company to produce a CMOS-based MEMS device.  The MEMS diaphragm is formed using etch and release of the CMOS metallization layers.  Figure 8 shows an SEM micrograph of the Akustica AKU2000 microphone diaphragm, which is formed using a serpentine pattern in the CMOS “metal 1” layer.  A benefit of this approach is the ability to easily integrate signal processing circuitry onto the same die, thus allowing for single-chip solutions.  Akustica was acquired by Robert Bosch GmbH in August 2009.

Figure 8: Akustica AKU2000 microphone diaphragm.

Chipworks expects to see continued innovation in the MEMS microphone market.  The innovation is likely to occur in the signal processing ASIC and the packaging, rather than in the MEMS part of the product.  After all, Infineon has done rather well selling its original MEMS die to multiple microphone suppliers.  There may not be a “Moore’s Law” governing the size of MEMS microphone devices, since the physics of sound require that the microphone diaphragm be large enough to interact with pressure variation induced by the sound waves.  At 10 kHz, the wavelength is 34 mm, which is already much larger than the typical 0.5 mm diameter being used in commercial MEMS microphones.  The diaphragm needs to be large enough, for a given plate thickness, such that the pressure variations from the sound waves for a human voice induce sufficient displacement of the capacitor plates to give a suitable signal for the ASIC amplifier.  We can expect circuit designers to continue to make innovations in the design of the ASIC, thus creating lots of new work for the reverse engineer.  Designers will likely move towards the integration of an ADC and digital signal processing within the microphone amplifier ASIC.
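The wavelength figure above follows from λ = c / f, taking the speed of sound in air at room temperature as roughly 343 m/s:

```python
# Acoustic wavelength: lambda = c / f (c ~ 343 m/s in air at room temperature)
speed_of_sound_m_per_s = 343.0
freq_hz = 10e3
wavelength_mm = speed_of_sound_m_per_s / freq_hz * 1000.0
print(f"wavelength at 10 kHz: {wavelength_mm:.1f} mm")  # ~34 mm
```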

*********************************************

St. J. (Sinjin) Dixon-Warren manages the Process Analysis group in the Technical Intelligence business unit at Chipworks.  His group provides technical competitive analysis services to the semiconductor industry, currently with a special focus on the analysis of MEMS, CMOS image sensor, advanced CMOS and advanced power devices.  He is the Sector Analyst for MEMS analysis at Chipworks.  Dr. Dixon-Warren holds a PhD in physical chemistry from the University of Toronto and a BSc in chemistry from Simon Fraser University.  He joined Chipworks in 2004 as a member of the process analysis group.  He is the author of about 50 publications and about 100 Chipworks reports.  Dr. Dixon-Warren can be reached at sdixonwarren@chipworks.com.

<from http://www.memsjournal.com/2011/03/overview-of-mems-microphone-technologies-for-consumer-applications.html>


Green Tea Press

Welcome to Green Tea Press, publisher of How to Think Like a Computer Scientist, The Little Book of Semaphores, and more.

<http://www.greenteapress.com/>


change names of multiple files script

# Rename files two directories deep, swapping "abcd" -> "acbd" in each name.
# Note: the sed expression must use straight quotes; curly quotes break it.
for file in */*/*
do
    ofile=$(echo "$file" | sed -e 's/abcd/acbd/g')
    mv "$file" "$ofile"
done
