The Significance of Critiques & Feedback Loops

A combination of data and intuition informs all decisions

Senior Practitioners bring a wealth of experience that can be measured by the quality of their portfolios, the brands on their resumes, and their years of experience. They provide additional value, of course--value that isn't easily measurable but makes a big difference.

They bring not only creativity but also deep knowledge to their craft. Among their essential tools is the Critique—which helps them refine their work and the work of others. This article will explore how Senior Designers and Researchers use Critiques and Feedback Loops to refine their outcomes and contribute to their peers' work quality.

Definitions

I use Feedback Loops as a shorthand for how Designers and Researchers take input from Buyers or Users and convert it into actionable information. Feedback Loops are great at collecting objective data about current pain points and what is or is not compelling about a solution.

For this post, a Critique is an exchange of thoughts and opinions among Practitioners on a body of creative work. A Critique requires no more than two individuals but can be of higher value in a group setting. Most Designers are familiar with Critiques, but that doesn't mean they have experience with a well-run, constructive review of creative solutions.

Scaling Actionable Feedback

Multilevel Value

Our first idea is rarely, if ever, the best. Design is creative problem-solving. Product Development is an iterative process that fine-tunes the user experience and ensures solutions address business goals and buyer or user pain points. Critiques benefit Design in the context of Product Development for many reasons.

Design is a team sport. Churn is an ever-present concern when an organization has a single Designer operating without peer feedback. Even the best, most experienced Designers benefit from the input of others. There are typically two to three intelligent ways to address user pain points and many less-than-ideal ways.

Senior Practitioners are capable of delivering independently of peer feedback. Their outcomes will be solid but will likely grow stale and predictable over time. Or the Designers or Researchers will inevitably get bored with the space and move on. Innovation will suffer. Managers should provide forums for sharing work and exchanging ideas.

Constructive Criticism

A good Critique will point out a solution's good and bad attributes. An excellent Critique will provide direction or potential areas of inspiration. An awful Critique involves personal comments or a one-sided conversation dominated by one voice or very few voices.

I have been in good, excellent, and awful Critiques. There are takeaways from each. Good Critiques are still better than not discussing work in progress--bear in mind that this is a value exchange, and often the value is being privy to the conversation even if you aren't an active participant. An excellent Critique leaves Practitioners feeling that they all learned something and that the work in question will improve along with the skills of the Designers iterating on the concept.

Awful Critiques are failures of Practitioners and managers. A lousy Critique leaves Designers and Researchers emotionally vulnerable; questioning one's worth and feeling like a poor culture fit are bad outcomes of a poorly run Critique. Designers should freshen up their resumes and get ready to leave if harsh Critiques are a method for signaling poor performance and/or a pretext to exit staff.

Senior staff should attend Critiques to provide clear and actionable feedback. Senior Practitioners should never make personal comments and should avoid crowding out other voices. It's okay to disagree in group settings, but bringing a personal agenda to a discussion of work is some petty, petty bullshit. Managers should watch for these negative scenarios and correct them openly and transparently.

Strength Spotting

Senior Designers have a keen eye for design details. They use Critiques to share, evaluate, and inspire through shared dialog. This internal review ensures that every element aligns with the design vision, usability, and aesthetics.

Senior Designers and Researchers are part of the hiring process for a reason. Maintaining or improving the quality of outcomes enables them to scale their expertise. Critiques are great opportunities for senior staff to see who excels at what part of their job in a way that goes deeper than a portfolio can reflect. 

I have seen many Senior Practitioners identify the talents of junior staff members during Critiques. By defending their work and the rationale that drove it, a junior member of a design team can make a lasting impression--both good and bad. Managers should see Critiques as opportunities to evaluate the team's soft and hard skills. 

Mind your gaps 

No one is excellent at ALL of the responsibilities of a modern Designer or Researcher. We all have room to grow, but it's only possible to recognize it with an outside perspective. A Critique helps us see our unmet potential through the lens of the work others contribute. 

Few things are better than a Critique that results in a follow-up meeting between peers to elaborate on trouble spots. I have seen many Designers and Researchers internalize feedback and return to the next Critique referencing specific areas they refined based on peer input.

Senior staff should attend Critiques to spot areas of improvement and see them as opportunities to mentor or provide guidance. They should also seek out criticism without ego. Managers should watch for repeated patterns to identify hard and soft skills needing improvement. 

Looped Input

Discover & Design, Rinse & Repeat

Feedback Loops are iterative processes in which Designers gather, analyze, and apply learnings. They form the foundation of continuous improvement (ahem, Agile methodology), and Senior staff are skilled at orchestrating the process effectively. A Feedback Loop differs from simply running research because part of the loop is changing experiences based on what was learned.

Bye Bye Bias 

Technology moves quickly. Software is updated continuously, shifting perceptions of what quality experiences are. The pace of change should result in those developing software questioning their assumptions with a healthy intellectual curiosity. Senior Practitioners are curious individuals who can recognize bias.

A Feedback Loop is excellent at eliminating bias. If we were aware of our biases, we wouldn't have to work so hard to dispose of them. 

People who live and breathe the problem space and/or deliver value using a tool or service aren't monolithic. They have unique needs, behaviors, and goals. That's the best part of getting their input. The challenge is identifying what is useful versus merely interesting to hear. Senior Researchers know the difference.

Effective Managers work with Senior Practitioners to constrain the scope, communicate findings, and drive outcomes from research without bias. I have seen biases in Development and Product Management. I have seen just as many, if not more, from Designers. Senior Researchers are great at being objective, which is central to their methodology.

Dated Data

Curiosity should overtake Conventional Wisdom if an organization wants to ensure it doesn't have blind spots. Asking 'Why' all the time isn't going to make you many friends, but it is often a valid question when faced with assumptions met with shrugged shoulders. Data may have been collected using the wrong methods and/or may be outdated.

Outdated data can result in honest mistakes and wasted effort. It's better to run a quick survey of users to validate or invalidate a direction than to start driving an experience that may not justify its ROI. Or worse, to do damage by driving outcomes the market doesn't value instead of improving an existing experience.

I have seen data points of one and the recency of client conversations drive churn that derails 'no-brainer' feature delivery. I have seen this pattern play out countless times over the last 20 years. The best way to combat it is to collect data on an ongoing basis. I'm not saying that executive intuition or a key customer is always wrong (or right), but validating a hypothesis can be quick and easy. Shipping the wrong thing at the wrong time is simply bad for business.

Managers should support Senior staff to collect data by working with Product Management, Sales, and Marketing to engage with customers and users as often as possible. Senior Practitioners should always have a list of questions and topics at the ready that will impact outcomes. Managers should work with Senior staff to deliver toplines and executive summaries quickly to meet the business's needs. 

Mixed Methods

Many intelligent people have strong, informed points of view around Qualitative and Quantitative methods and User Research. Several dozen (at least) good books are available on the topic. There is a time for either approach to collecting data about users and customers. It depends on the question you are looking to answer, the problem you are looking to solve, the speed at which you need an answer, and who you need to talk to. 

Business, Marketing, and Development audiences respond best to quantitative analyses and are most familiar with survey methods. There is safety in larger numbers. The issues with a larger sample size include, but are not limited to: a) cost, b) time, and c) quality. It isn't easy to reliably scale input. Certain types of inquiry lend themselves to quantitative analysis, but usability testing isn't one of them.

Not all design-related questions require qualitative methods. Design values more than usability testing, and in all fairness, not every design decision requires user data--there are times when risk is acknowledged and trust is needed. If a company wants to make a statement, expecting data to prove the risk is worth taking isn't realistic.

I have facilitated studies that existed only because leadership lacked the fortitude to make a judgment call. I have had to commission avoidable studies to allay executive misgivings. Running a survey or a series of eight one-on-one interviews to make people feel better about a decision is a practice that will be around for a while. That's ok--we learn from each project we deliver.

Managers should work with Senior staff to identify the correct methods for the questions at hand. Senior staff should take the opportunity to educate non-design team members on why the techniques are practical and how they will lead to results the team can trust.

Mutual Inclusivity

It’s not one or the other

Critiques are great for the design team. So are Feedback Loops. They complement one another, driving iterations and meaningful revisions to user experiences. A Critique is not a replacement for a Feedback Loop and vice-versa. There is no 'right way' or set order to leverage each process.

Objective data from a Feedback Loop leads to improvements to a design, which the design team then Critiques. Subjective input from the design team leads to questions that users can best answer. The ordering depends on the problem the team is trying to solve, the importance of the solution to the business, the time the team has to solve it, and so on.

The How

Managers should be strong supporters of both Critiques and Feedback Loops. Critiques are easier to enable since they are internal to the Design Team. Often, a senior team member will volunteer to host, which can help reduce the stress of presenting work in progress to leadership. Feedback Loops require Management to do more heavy lifting, since research participants may not be readily available.

Senior staff should foster a culture of collaboration and learning, offer constructive feedback, share insights, and help Junior Designers develop their skills. Senior Designers and Researchers are perpetual learners who benefit from exposure to industry trends as they drive best practices.

In the final installment of this series, I will address Autonomy and Trust, which are essential for Senior staff and Management.
