How to Critique Research Methods

This is the third creative challenge in Research Methods in Professional and Technical Communication. This challenge introduces students to methodological flaws associated with the studies conducted by scholars/theorists, designers/creatives, and empiricists (i.e., qualitative, quantitative, and mixed-methods researchers). The article below summarizes common problems with research methods. For instance, it introduces ethical concerns, including the impact of AI systems on inquiry. It analyzes common problems with scholarly, design, creative, quantitative, qualitative, and mixed-methods research. It contextualizes the need for critique by introducing Samuel Arbesman’s work, The Half-life of Facts. Working collaboratively in groups of three, students continue developing the research notes they began for the second creative challenge. This time, rather than focusing on the ways researchers ask questions, present literature reviews, and engage in citation, they analyze, critique, and reflect on the methodologies that PTC researchers employ in a disciplinary journal. Evaluation criteria include methodological appropriateness given the audience and research questions, ethical considerations, and alignment with the research conventions and epistemological positions of scholars, creatives, or empiricists. Subsequently, the groups will prepare a presentation for their peers that reports on their findings. The goal here is to identify the epistemological assumptions, ethical practices, and scholarly conversations that inform scholarship and research in PTC, at least as reflected in the journals analyzed by the student groups. Additionally, working individually, students prepare a reflection that reports on their use of AI while composing the research notes and group presentation. They also identify, based on the group work and presentations, the study they would most like to conduct.

While a presenter points to his presentation on a screen, a member of the audience raises a sign that says "Your results aren't generalizable!"


The Creative Challenge

For this project, you will collaborate with the same teams from the second challenge to critique research methods from disciplinary and ethical perspectives. Building on your previous analysis, you will focus on the same nine studies, for which you analyzed research questions, literature reviews, and citations from a PTC journal. This time, you will address ethics and methodological critique.

Deliverables

This creative challenge calls for 3 deliverables:

Deliverable #1 – Research Notes

Working collaboratively in groups of three, prepare research notes for nine articles that were published in PTC journals (see list below). Each group member should choose, if possible, studies that use at least three different research methods:

  1. Scholarly and Theoretical Research (aka Textual Research)
  2. Design and Creative Methods
  3. Empirical Methods
    • Qualitative Methods
    • Quantitative Methods
    • Mixed Methods

If, for whatever reason, you wish to analyze different articles, that’s fine; however, if you choose this option, you should provide a succinct analysis of the new study’s research question, literature review, and citation.

Please follow this template for your notes:

  • Your Name
  • APA 7 citation for the article being reviewed
  • Authority Score
    • Based on the textual, rhetorical, and citation analysis you conducted for the second creative challenge and your methodological critique for this challenge, grade the study’s authority on a four-star system, with one star ascribed to a terrible study and four stars to an excellent one
  • Data Visualization
    • Create a table or some other sort of visualization that highlights the authority score you have given each study (one way to script such a chart is sketched at the end of this deliverable).
  • Identification of the primary research method the study employed and the Methodological Community the research targets as its primary audience. This can be a couple of words or one sentence.
  • Total word count of new content

  • Paste in your analysis of the study’s research question, literature review, and citation (and required tables)
  • Summarize the Research Methods (100 to 200 words)
    • Identify the investigator’s methodological moves that led you to define the study as representative of a particular methodological community. In the case of mixed-methods studies, explain the different types of methods used. Provide the specifics the reader needs to know to understand how data was gathered and interpreted. If the researchers cited other sources to explain and substantiate their methods, be sure to point that out (and consider that when evaluating the study’s authority score).
  • Critique the Research Methods from ethical and PTC disciplinary perspectives (400 to 500 words). Consult the article below — “Introduction to Ethical and Methodological Critique” — to identify relevant critical lenses for analyzing the research studies. Consider aspects such as methodological appropriateness, ethical considerations, and alignment with the research conventions and epistemological positions of scholars, creatives, or empiricists.

    Evaluation Criteria:
    • Depth of understanding of research methods in PTC
    • Quality of methodological critique
    • Clarity and coherence of analysis and argumentation
    • Demonstration of critical thinking skills
    • Proper use of professional writing conventions and citation practices

    Note: Remember to focus on the methodological aspects rather than critiquing the content or findings of the studies. Your analysis should demonstrate your understanding of the research methods discussed in our course and their application in real-world PTC research.
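
If your group would rather generate the data-visualization item programmatically than build a table by hand, here is a minimal sketch in Python with matplotlib (both assumed available in your environment). The article labels and star values are hypothetical placeholders, not part of the assignment:

```python
import matplotlib.pyplot as plt

# Hypothetical authority scores (1-4 stars) for a few of the nine articles;
# replace the labels and values with your group's actual notes.
articles = ["Smith (2021)", "Lee & Park (2019)", "Garcia (2023)"]
scores = [3, 1, 4]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(articles, scores, color="steelblue")  # one horizontal bar per study
ax.set_xlim(0, 4)
ax.set_xticks([1, 2, 3, 4])
ax.set_xlabel("Authority score (stars)")
ax.set_title("Authority scores for reviewed studies")
plt.tight_layout()
plt.savefig("authority_scores.png")  # embed the image in your research notes
```

A plain table in your word processor works just as well; the point is that readers should be able to compare the nine scores at a glance.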

Deliverable #2 – Presentation on a PTC Journal from a Disciplinary Perspective

Working collaboratively, develop and present a five-minute report to your peers.

  1. Identify the journal your group analyzed. Share what your group inferred about the status of scholarly conversations in the PTC discipline — at least as represented by the journal and the nine studies you analyzed. Based on your analysis, do you think the journal’s audience reflects a mixture of methodological communities? Or would you say the readers primarily use one method, such as scholarly methods?
  2. What was the best article your group reviewed? What made it so interesting to your group?

Format

The first slide of your presentation should list group members’ names. Your team may prepare a slide or two that synthesizes the nine notes. However, the focus of this presentation is a bit more abstract than a mere summary of research notes. Instead, the goal is to share your group’s insights about important scholarly conversations and research methods in the PTC community — as reflected by the one professional journal you analyzed.

As you look across all nine studies, what patterns can you identify regarding the scholarly conversations that interest the PTC community? From the nine studies your group critiqued, would you say the discipline — at least as reflected in the journal you studied — favors scholarship, theory, quantitative analysis, or qualitative analysis?

Looking beyond the nine articles you analyzed and flipping through the tables of contents of issues from the past few years, what is your sense of the epistemologies that inform research and scholarship in the PTC field? As you analyzed these ongoing conversations, did you generate any new ideas for research you or members of your group would like to conduct? If you were to replicate a study or create a new one, what would be your research question and research methods?

Deliverable #3 – Reflection

Working individually, write a 200- to 250-word reflection that reports on:

  1. Based on the articles your group critiqued, what one study would you like to replicate? Or, if you were to develop a new study, what would your research question be?
  2. What AI tools did you use, and how did you use those tools?

Why Does this Creative Challenge Matter?

By engaging in methodological critique of these studies — and by learning about your peers’ critiques — you will enhance your critical-literacy competencies and gain a broader understanding of contemporary knowledge-making practices in professional and technical communication. Being a critical consumer of research is especially important now, given the exponential growth in the number of journal and book publishers made possible by inexpensive digital tools and the internet.

Step 1 – Choose the Journal & Nine Articles to Review

Word cloud composed by aggregating titles of journals in PTC

Working in groups of three, identify a journal to analyze from the following list of research journals in professional and technical communication, which was compiled and annotated by Jason Tham, an associate professor of technical communication and rhetoric at Texas Tech University. Please state your claim to a particular journal by noting it on the course sandbox. Ideally, to enhance learning across the class, groups will choose different journals.

Each group member should choose studies that use at least three different research methods. Ideally, the group will choose a journal that reflects the practices of more than three different methodological communities.

Note: if a journal is locked behind a paywall, you may need to log on to the University’s Library Services portal to access it.

Journal of Business and Technical Communication (SAGE)

  • Nature: Theory driven; seems to balance qualitative and quantitative research
  • Focus: Technical and business communication practices and pedagogy; discussions about training students to be professionals; some useful teaching strategies and cases
  • Notes: Currently one of the top journals in technical communication; arguably most cited; has a strong tie to Iowa State’s professional communication program

Journal of Technical Writing and Communication (SAGE)

  • Nature: Slightly less theoretical than JBTC and TCQ but still heavy academic-speak
  • Focus: Trends and approaches in technical communication practices and research
  • Notes: One of the oldest technical communication journals in the US

Technical Communication (Society for Technical Communication)

  • Nature: Arguably more practical than JTWC, JBTC, TCQ, and IEEE Transactions; caters to STC’s professional audience… and it’s associated with the STC’s annual summit
  • Focus: Emerging topics, methods, and practices in technical communication; content management, information architecture, and usability research
  • Notes: It’s behind a paywall that some university libraries may not subscribe to; there is an online version of the journal called Technical Communication Online… but it’s not as prominent as the print journal; seems to have a strong association with Texas Tech’s technical communication program

Technical Communication Quarterly (Association for Teachers of Technical Writing) (Taylor & Francis)

  • Nature: Theoretical + pedagogical
  • Focus: Teaching methods and exemplary approaches to research; features many exemplary qualitative research cases
  • Notes: Another top journal in technical communication; produces many award-winning pieces; associated with ATTW so it has a huge academic following… especially those who also attend the annual Conference on College Composition and Communication (CCCC)

IEEE Transactions on Professional Communication (Institute of Electrical & Electronics Engineers – Professional Communication Society)

  • Nature: 50-50 theory and practice
  • Focus: Engineering communication as professional communication; empirical research
  • Notes: Another old journal that has a lot of history; seems to have a strong tie to the University of North Texas’s technical communication department

IEEE Transactions on Technology and Society (IEEE Society on Social Implications of Technology)

  • Nature: 30% technical, 70% philosophical discussions about social technologies
  • Focus: Computer science, CS education, technical design, social computing
  • Notes: Good for interdisciplinary work, digital humanities, and digital education

Communication Design Quarterly (Association for Computing Machinery – Special Interest Group on Design of Communication)

  • Nature: Theoretical, methodological
  • Focus: Offers many accessible (comprehensible) research reports on design methods, research practices, teaching approaches, and industry trends
  • Notes: Open access…yay! Recently pursued an “online first” model where articles are published on a rolling basis; it’s considered a second-tier journal in academic circles, but it’s surely becoming more popular among technical communication scholars

Journal of Usability Studies (User Experience Professionals Association)

  • Nature: For academics, this is highly practical
  • Focus: Empirical research; mostly quantitative
  • Notes: Independent journal not associated with an academic institution

Behaviour and Information Technology (Taylor & Francis)

  • Nature: Computer science emphasis… so, experimental + theoretical
  • Focus: Human-computer interaction; information design, behavioral science
  • Notes: This is a UK journal… provides a nice juxtaposition to US journals and perspectives

Human Factors: The Journal of the Human Factors and Ergonomics Society (SAGE)

  • Nature: Similar to BIT, experimental and theoretical
  • Focus: Puts emphasis on the human factors and ergonomics discipline; draws from psychology
  • Notes: As shown in its name… it’s a journal for the Human Factors and Ergonomics Society

Ergonomics in Design: The Quarterly of Human Factors Applications (SAGE)

  • Nature: Slightly more theoretical than Human Factors
  • Focus: Theoretical discussions, experiments, and demonstrations
  • Notes: Also an HFES journal

International Journal of Human-Computer Studies (Elsevier)

  • Nature: Theoretical
  • Focus: More interdisciplinary than EID and Human Factors
  • Notes: May be one that technical communication researchers feel more comfortable publishing in even if they are not working directly in HCI or computer science fields

Human Technology (Independent journal)

  • Nature: Theoretical, philosophical
  • Focus: Discusses technological futures and human-computer interaction
  • Notes: It’s got less prestige compared to EID and Human Factors

Human Communication & Technology (Independent journal)

  • Nature: Theoretical, empirical
  • Focus: Communication studies and social technologies
  • Notes: It’s fairly new and doesn’t seem to publish multiple issues a year

Journal of Computer-Mediated Communication (International Communication Association) (Oxford)

  • Nature: Empirical; qualitative; quantitative
  • Focus: Social scientific approach to computer-based communication; media studies and politics; social media research
  • Notes: Top journal for solid communication technologies research

International Journal of Sociotechnology and Knowledge Development (IGI Global)

  • Nature: Empirical; qualitative; quantitative; practical
  • Focus: Social scientific approach to technology studies and professional communication; seems catered to practitioner audience
  • Notes: Has an interdisciplinary feel to it; one or two special issues are of specific interest to technical communication design

Business and Professional Communication Quarterly (SAGE)

  • Nature: Theoretical, pedagogical
  • Focus: Workplace communication studies and teaching cases
  • Notes: A journal of the Association for Business Communication (ABC); top-tier for business writing and communication research

International Journal of Business Communication (SAGE)

  • Nature: Practical, pedagogical, experimental
  • Focus: Similar focus to BPCQ
  • Notes: Also an ABC journal (I am not sure why there is this other journal)

Programmatic Perspectives (Council for Programs in Technical and Scientific Communication)

  • Nature: Programmatic, pedagogical
  • Focus: Program and curriculum design; teaching issues; professional development of teachers
  • Notes: Smaller journal… not sure how big the readership is, but it’s got a good reputation

Xchanges: An Interdisciplinary Journal of Technical Communication, Rhetoric, and Writing across the Curriculum (Independent journal)

  • Nature: Pedagogical, beginner research, experimental, teaching cases
  • Focus: Technical communication, writing studies, rhet/comp, and everything in between!
  • Notes: Open access journal with pretty good editorial support; provides mentorship to undergrad and graduate writing; multimedia friendly

RhetTech Undergraduate Journal (Independent journal) 

  • Nature: Beginner research, undergraduate research
  • Focus: Writing studies, rhet/comp, technical communication
  • Notes: Open access; print based (PDF) so not very multimedia-friendly

Step 2 — Engage in Methodological Critique

To prepare you for this challenge, the article below summarizes the most common weaknesses of the methods and knowledge claims made by investigators who are seeking to contribute to the scholarly conversations of prominent methodological communities.

The illustration is a word cloud created from all of the words in the article.

Across academic and professional fields, investigators use a multitude of research methods. In order to produce credible, authoritative research results, investigators need to use the methods their audience expects them to use. Otherwise, their work will be dismissed as worthless, not worth reading.

Step-by-Step Guide to Methodological Critique

Why Are Researchers So Critical of Other Researchers’ Works?

Throughout history, human knowledge has constantly evolved, but the pace of this evolution has accelerated dramatically. In the past, scientific findings often held sway for centuries. The Ptolemaic model of the universe, for example, which placed the Earth at the center with all celestial bodies revolving around it, persisted for over 1,400 years.

In the last few decades the overall pace of scholarship and research has increased dramatically, with new studies and discoveries being published at a much faster rate. This rapid accumulation of new knowledge can quickly make previous findings obsolete. A study highlighted by Samuel Arbesman in his book The Half-life of Facts found that the average time for a scientific finding to be refuted or significantly modified has decreased from 45 years in the 1960s to just 5 years in the 2010s (Arbesman, 2012). Consider this example from Arbesman:

A few years ago a team of scientists at a hospital in Paris decided to actually measure this (churning of knowledge). They decided to look at fields that they specialized in: cirrhosis and hepatitis, two areas that focus on liver diseases. They took nearly five hundred articles in these fields from more than fifty years and gave them to a battery of experts to examine.

Each expert was charged with saying whether the paper was factual, out-of-date, or disproved, according to more recent findings. Through doing this they were able to create a simple chart (see below) that showed the amount of factual content that had persisted over the previous decades. They found something striking: a clear decay in the number of papers that were still valid.

Furthermore, they got a clear measurement of the half-life of facts in these fields by looking at where the curve crosses 50 percent on this chart: 45 years. Essentially, information is like radioactive material: Medical knowledge about cirrhosis or hepatitis takes about forty-five years for half of it to be disproven or become out-of-date.

Samuel Arbesman, The Half-life of Facts
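
To make Arbesman’s radioactivity analogy concrete, the experts’ 45-year figure corresponds to the standard half-life decay formula. Here is a minimal worked version in LaTeX notation, assuming simple exponential decay (the assumption the passage itself makes):

```latex
% Fraction F of findings still valid after t years, with half-life T = 45:
F(t) = \left(\tfrac{1}{2}\right)^{t/T},
\qquad F(45) = \tfrac{1}{2},
\qquad F(90) = \left(\tfrac{1}{2}\right)^{2} = \tfrac{1}{4}
```

In other words, if the half-life holds, only a quarter of today’s findings about cirrhosis and hepatitis would survive unrevised after ninety years.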

In an interview with The Economist, Arbesman said:

I want to show people how knowledge changes. But at the same time I want to say, now that you know how knowledge changes, you have to be on guard, so you are not shocked when your children (are) coming home to tell you that dinosaurs have feathers. You have to look things up more often and recognise that most of the stuff you learned when you were younger is not at the cutting edge. We are coming a lot closer to a true understanding of the world; we know a lot more about the universe than we did even just a few decades ago. It is not the case that just because knowledge is constantly being overturned we do not know anything. But too often, we fail to acknowledge change.

Some fields are starting to recognise this. Medicine, for example, has got really good at encouraging its practitioners to stay current. A lot of medical students are taught that everything they learn is going to be obsolete soon after they graduate. There is even a website called “up to date” that constantly updates medical textbooks. In that sense we could all stand to learn from medicine; we constantly have to make an effort to explore the world anew—even if that means just looking at Wikipedia more often. And I am not just talking about dinosaurs and outer space. You see this same phenomenon with knowledge about nutrition or childcare—the stuff that has to do with how we live our lives.

(RDA 11/2012)

Misinformation presents another significant challenge to assessing the authority of published research studies. The rapid spread of false or misleading information, amplified by digital platforms and predatory journals, has made it increasingly difficult for researchers and the public alike to distinguish between credible and unreliable sources. This phenomenon affects all fields, from professional and technical communication to cutting-edge scientific research.

According to the STM Report, there were over 33,100 active scholarly peer-reviewed English-language journals in 2018, publishing approximately 3 million articles annually (Johnson, Watkinson, & Mabe, 2018). This represents a significant increase from just 23,000 journals in 2011. This proliferation of outlets for scholarly work, while providing more opportunities for researchers to share their findings, also presents challenges. Researchers must now navigate a complex landscape of publications, distinguishing between rigorous, ethical studies and those that fall short of scholarly standards.

The rise of predatory journals, which prioritize profit over academic integrity, further complicates this landscape. A 2015 study estimated that there were over 8,000 predatory journals actively publishing, underscoring the importance of developing strong critical evaluation skills (Shen & Björk, 2015).

Moving forward, thanks in part to the rise of AI systems, the number of published research studies is growing even more dramatically. Some forecasters speculate that AI systems could reach superintelligence as early as 2029, at which point they would be producing knowledge at a pace far exceeding human researchers.

Why is it Important to Critique Methods from an Ethical Lens?

A scientist pours a chemical solution on a colleague. Conflicts sometimes arise when people interpret events differently. Photo credit: “Two Scientists Taking a Break” by Morel, licensed CC BY 4.0.

Ethics are another important concern for researchers and consumers of research studies. The integrity of research is not solely dependent on methodological rigor but also on ethical conduct. Historical examples have served as cautionary tales about the potential for harm in research:

  • The Tuskegee Syphilis Study, conducted between 1932 and 1972, involved researchers withholding treatment from African American men with syphilis to study the disease’s progression, even after effective treatments became available.
  • The Stanford Prison Experiment in 1971, led by psychologist Philip Zimbardo, aimed to study the psychological effects of perceived power. In this experiment, college students were randomly assigned roles as prisoners or guards in a mock prison. The study quickly spiraled out of control as “guards” became increasingly abusive, and “prisoners” showed signs of extreme stress and breakdown. The experiment, originally planned for two weeks, was terminated after just six days due to the psychological harm being inflicted on participants.

In response to these and other ethical problems, researchers in the U.S. and Europe established Institutional Review Boards (IRBs) and ethics committees to oversee research involving human subjects. These bodies are designed to protect participants and ensure that research adheres to ethical standards. However, ethical concerns continue to be a problem in contemporary research. Here are some examples of recent ethical lapses and concerns in various domains:

  1. Social media research: In 2014, Facebook conducted an “emotional contagion” study where they manipulated users’ news feeds without proper informed consent. This sparked outrage about privacy violations and ethical research practices in social media.
  2. Climate change studies: In 2015, it was revealed that Dr. Wei-Hock Soon and Dr. Sallie Baliunas, astrophysicists at the Harvard-Smithsonian Center for Astrophysics, had received $274,000 from the American Petroleum Institute and $335,000 from Exxon Mobil without disclosing this funding in most of their scientific papers. This raised concerns about conflicts of interest and the potential for industry funding to bias research outcomes.
  3. Nutrition studies: In 2016, it came to light that the sugar industry had funded research in the 1960s that downplayed the link between sugar consumption and heart disease, instead shifting blame to fat. This historical example continues to influence contemporary debates about industry funding in nutrition research.
  4. Vaping studies: In 2019, concerns were raised about potential conflicts of interest in e-cigarette research when it was revealed that several scientists conducting studies on the health effects of vaping had received funding from e-cigarette companies or anti-vaping groups, potentially biasing their findings.

Yet another confounding factor in assessing the authority of research is the tendency for investigators to overgeneralize what their results mean.

Consider, for example, J.D. Vance’s “Hillbilly Elegy.” This work has been categorized as part memoir and part autoethnography, a qualitative research method in which the researcher uses their own personal experiences and cultural context to analyze and understand broader social phenomena. In an autoethnographic work, the author’s subjective perspective and lived experiences are central to the analysis and interpretation.

In the case of “Hillbilly Elegy,” Vance draws heavily on his own upbringing and family history as a means of exploring the cultural and socioeconomic dynamics of the working-class white communities of Appalachia, which he refers to as “hillbilly” culture. Vance portrays “hillbilly” culture as “white trash” living on government subsidies. This approach raises several epistemological issues:

  1. Overgeneralization: Vance extrapolates his individual experiences and family history to make sweeping claims about Appalachian culture. This violates the epistemological principle that qualitative, personal narratives are context-dependent and not necessarily generalizable.
  2. Limited perspective: As a single voice, Vance’s account fails to capture the diversity of experiences within Appalachia. The Appalachian region spans 13 states, covers 205,000 square miles, and is home to more than 25 million people across 420 counties. Vance’s narrative, based primarily on his experiences in one part of Ohio, cannot adequately represent this vast and diverse area.
  3. Stereotyping: Vance’s portrayal of Appalachian residents as “white trash” dependent on government subsidies and prone to drug addiction reinforces negative stereotypes rather than providing a nuanced understanding of the region’s diverse population. This characterization oversimplifies complex socioeconomic issues and perpetuates harmful misconceptions about Appalachian communities.
  4. Misrepresentation of expertise: By presenting his personal story as representative of an entire region, Vance assumes a level of authority that is not justified by his methodology or breadth of experience. His individual account cannot adequately represent the varied experiences across the vast Appalachian region.
  5. Neglect of historical and structural factors: The book’s focus on personal responsibility and cultural explanations for poverty overlooks broader historical, economic, and political factors that have shaped Appalachia’s challenges.

Why is it Important to Critique Research from an Epistemological Lens?

As discussed in the first two creative challenges, it is essential to consider the epistemological assumptions that underlie different research methodologies. Epistemology, the theory of knowledge, matters because it influences how research is conducted and interpreted. For example, one cannot use scholarly analysis of the weather described in 19th-century novels as climate data for climate-change research. Each methodological community has distinct approaches and standards for engaging in research, and their methods all come with limitations:

  • Scholars: Knowledge is constructed through ongoing debates and arguments, with no ultimate resolution, reflecting hermeneutic principles. Thus, scholarship cannot produce absolute facts, nor can it generalize from the particular to the global. By itself, scholarship cannot test a hypothesis or establish a cause-and-effect relationship.
  • Quantitative Empiricists: Scientists assume knowledge is created through the identification of cause-and-effect relationships that can be measured and generalized. This reflects a positivist view that the world is orderly and knowable through objective, statistical methods. However, quantitative methods — i.e., the scientific method — can only indicate likelihoods and probabilities, not absolute truths.
  • Qualitative Empiricists: For qualitative researchers, knowledge is context-dependent and emerges from in-depth, subjective analysis, with a faith in stories and narratives. This post-positivist approach acknowledges that understanding is shaped by the experiences and perspectives of both the researcher and participants, emphasizing that findings are not generalizable but provide deep insights into specific contexts.
  • Designers and Creatives: This methodological community rests its methods — which typically are mixed methods — on creative thinking, interviewing, and observation (e.g., autoethnography, venture design, or usability testing). For creatives, knowledge isn’t the point. Instead, it’s creating a narrative, service, or application that solves a problem. The method doesn’t try to establish universal truths. Rather, it attempts to develop practical, user-centered solutions that are iteratively tested and refined. This approach focuses on creating innovative designs that meet user needs and solve real-world problems. Ultimately, this method cannot produce generalizable truths. It can, instead, assess usability and the quality of design based on user feedback.
  • Mixed Methods: This community is composed of scholars, quantitative empirical researchers, qualitative empirical researchers, and creatives. It mixes methods and epistemological positions, often producing research that has specific sections for specific types of readers — different methodological communities. That said, some methodologists who insist on methodological purity believe mixed-methods work is inherently flawed.

Given the exponential growth of published research, the emergence of predatory journals, the potential for conflicts of interest, and the prevalence of misinformation, researchers approach their peers’ work with a critical eye, regardless of the methodology employed. Both creators and consumers of research must rigorously evaluate the currency, relevance, authority, accuracy, and purpose of the works they cite or rely upon. This critical evaluation is essential to avoid building studies on a shaky foundation. For example, the cold fusion announcement in 1989 claimed to achieve nuclear fusion at room temperature, initially sparking excitement. However, it was soon met with skepticism and ultimately discredited due to the inability to replicate the results. While replication is specific to scientific research, the principle of rigorous scrutiny applies across all fields. An overreliance on secondary sources can perpetuate misinformation and flawed interpretations, compounding errors over time. Consumers of research must be especially vigilant given the sheer volume of publications and the potential for studies to be influenced by funding sources or ideological biases. By critically assessing the validity and reliability of studies, both researchers and consumers can navigate the complex landscape of modern scholarship and avoid the pitfalls of flawed or unsubstantiated findings.

What Questions Can You Ask to Evaluate the Authority of a Research Study?

  1. What are the credentials of the investigators? Are they associated with a business or university?
  2. Who funds the research?
  3. Is the research question clear and well-defined?
  4. Are the chosen methods appropriate for addressing the research question?
  5. Is there a clear description of data collection and analysis procedures?
  6. Are ethical considerations addressed, including informed consent and protection of participants?
  7. Are the limitations of the study clearly acknowledged?
  8. Are the conclusions supported by the data and analysis presented?

What Are Some Common Problems with Scholarly Methods?

Lack of Systematicity

Traditional literature reviews and historical analyses may lack thoroughness and rigor. Scholars may cite only the studies that agree with them and ignore the counterarguments that any reasonable reader would expect them to address. Thus, it’s important to evaluate whether the authors demonstrate knowledge of the current state of the field by citing recent and relevant literature. Once you’re an expert in a discipline, you’ll be able to identify whether the researchers are citing the correct sources — the canonical texts. Look for any important literature or references that may be missing.

Overreliance on Secondary Sources

Some researchers may not sufficiently engage with primary sources, leading to a recycling of potentially flawed interpretations.


Accuracy and Misrepresentation

Recent discussions have highlighted growing concerns about the accuracy and reliability of scholarly interpretations. Even when quoting sources in their original language, scholars may misrepresent others’ work, even if not intentionally, especially when dealing with texts from different historical or cultural contexts.

Potential for Bias

Especially when engaged in argument, researchers may cherry-pick sources that support their viewpoint, ignoring counterarguments and counter-interpretations.

Cascading Inaccuracies

Errors or misinterpretations in secondary sources can be perpetuated and amplified in subsequent works. Evaluate whether the authors engage in citation justice, including diverse and representative voices in their references. Use of AI systems such as ChatGPT or Claude may also result in the proliferation of misinformation.

Citation Inaccuracies

Sources may not be properly cited and credited.

What Are Some Common Problems with Quantitative Empirical Methods?

Inappropriate Research Design

Quantitative studies may suffer from research designs that do not align well with the hypotheses being tested. For example, a study analyzing how professionals use AI systems might use surveys to gather data, but direct observation could provide more accurate insights into real-world interactions with AI. An ill-suited design can lead to misleading conclusions about AI usage.

Poor Definition and Operationalization of Variables

Variables must be clearly defined and measured to ensure accurate results. For instance, in a study on AI tool usage, if “AI tool usage frequency” is not well-defined, participants might include varied tools (e.g., basic spell-checkers vs. advanced AI assistants), leading to inconsistent data and potentially invalid results.

Inadequate Quality of Procedures

Poorly designed tasks and instructions can result in unreliable data. In a study examining AI tool use, if instructions are unclear or fail to account for varying levels of familiarity with AI tools, the data may not accurately reflect true usage patterns, leading to biased or unreliable findings.

Lack of Transparency

A lack of transparency in data reporting and undisclosed conflicts of interest can undermine trust in research. For instance, if a study evaluating a technical writing tool is funded by its manufacturer, this potential bias should be disclosed to maintain the study’s credibility.

Ethical Concerns

Ethical guidelines must be followed to protect participant privacy and address potential harms. For example, in research involving employee communication practices, ensuring confidentiality and minimizing risks to participants is crucial.

Oversimplification of Complex Phenomena

Quantitative methods may reduce complex social phenomena to numerical data, losing depth and nuance. For example, using a Likert scale survey to measure user satisfaction with a technical document might overlook important qualitative insights about user experiences.

Unwarranted Causal Claims

Researchers might make causal claims without adequate evidence or control of confounding variables, leading to misleading conclusions. For example, attributing improved technical writing skills solely to a new training program without considering other factors (e.g., prior experience) can result in incorrect causal inferences.

Ignoring Limitations of Quantitative Measures

Quantitative research may over-rely on numerical data, neglecting the value of qualitative insights. For example, relying solely on survey data to assess user satisfaction may ignore valuable qualitative feedback from interviews or focus groups.

Sampling Issues

Inadequate sampling strategies can affect the representativeness and validity of results. For example, if a study uses a non-representative sample of users for evaluating a technical document, the findings may not generalize well to the broader population.

Data Misinterpretation

Quantitative data can be misinterpreted if not properly analyzed or if statistical methods are applied incorrectly. For instance, misusing statistical techniques to analyze survey data can lead to incorrect conclusions about user satisfaction or behavior.
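
To see how an unwarranted causal claim can slip past correct-looking numbers, consider a small simulation of the training-program example above. Everything here is synthetic and hypothetical; the sketch assumes only Python with NumPy:

```python
import numpy as np

# A confounder (prior experience) drives both training attendance and
# writing scores, producing a spurious training-score correlation.
rng = np.random.default_rng(0)
n = 500
experience = rng.normal(size=n)              # hidden confounder
training = experience + rng.normal(size=n)   # experienced staff train more
scores = experience + rng.normal(size=n)     # experience, not training, drives scores

print(f"naive correlation: {np.corrcoef(training, scores)[0, 1]:.2f}")

# Control for experience by correlating the residuals after regressing
# each variable on the confounder (a simple partial correlation).
res_t = training - np.polyval(np.polyfit(experience, training, 1), experience)
res_s = scores - np.polyval(np.polyfit(experience, scores, 1), experience)
print(f"partial correlation: {np.corrcoef(res_t, res_s)[0, 1]:.2f}")
```

The naive correlation looks substantial, while the partial correlation collapses toward zero, which is exactly the trap described under “Unwarranted Causal Claims.”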

Why is most published research wrong?

What Are Some Common Problems with Qualitative Empirical Methods?

Lack of Transparency

Researchers may fail to provide sufficient detail about their data collection and analysis processes, making it difficult for readers to evaluate the rigor of the study.

Researcher and Disciplinary Bias

The subjective nature of qualitative research can lead to unacknowledged researcher bias influencing data collection, analysis, and interpretation. Additionally, researchers may be influenced by their disciplinary lens, which shapes what they see and how they interpret data. For example, a professional and technical communication researcher might focus on document usability and information design, potentially overlooking broader organizational or cultural factors that influence communication practices.

Limited Generalizability

Qualitative methods, rooted in interpretivist or post-positivist epistemologies, typically do not aim for broad generalizations. Instead, they focus on in-depth understanding of specific contexts and experiences. The small sample sizes and context-specific nature of qualitative research align with these epistemological foundations, viewing knowledge as socially constructed and context-dependent. However, researchers sometimes inappropriately generalize their findings beyond the studied context. For example, in Janet Emig’s “The Composing Processes of Twelfth Graders,” she interviewed a specific group of high school students and then made broader claims about the writing processes of all college students. This overgeneralization is a methodological error that misrepresents the applicability of the findings.

Inadequate Data Triangulation

Triangulation involves using multiple data sources, methods, or researchers to corroborate findings. Failure to triangulate data can lead to skewed or incomplete interpretations. Triangulation is important because it enhances the credibility and validity of the research by cross-verifying information from different perspectives.

Poor Integration of Theory

Researchers may either force data to fit preexisting theories or fail to connect their findings to relevant theoretical frameworks, leading to superficial or biased interpretations.

Lack of Reflexivity

Insufficient consideration and disclosure of the researcher’s own position, biases, and influence on the research process can undermine the credibility of the study. Reflexivity involves critically examining one’s own role and potential impact on the research.

Inappropriate Extrapolation of Findings

While qualitative methods preclude broad generalization, researchers may sometimes attempt to apply findings beyond the specific context of the study. Instead of generalization, qualitative researchers should focus on transferability, clearly describing the research context and allowing readers to determine how findings might apply to similar situations.

Inadequate Protection of Participant Privacy

Failure to sufficiently anonymize data or consider the potential for participant identification, especially in small or close-knit communities, can compromise participant privacy and ethical standards.

Superficial Analysis

Some researchers may not delve deep enough into their data, resulting in surface-level interpretations that fail to capture the complexity of the phenomena studied.

Misuse of Qualitative Approaches

Applying qualitative methods to research questions better suited for quantitative approaches, or vice versa, can lead to inappropriate conclusions.

Dominance of Researcher’s Voice

Failure to adequately incorporate and represent participants’ perspectives and experiences in the final report can occur when the researcher’s voice dominates the narrative, effectively silencing the participants. This can happen when the researcher prioritizes their interpretations over direct quotes and stories from participants.

Lack of Attention to Context

Insufficient consideration or description of the social, cultural, and historical context in which the research takes place can limit the depth and applicability of the findings.

What Are Some Common Problems with Design & Innovation Methods?

Innovation Process

  • Lack of a clear problem definition
  • Insufficient exploration of alternative solutions before settling on a design direction
  • Failure to document key decision points in the design process
  • Confirmation bias: seeking to prove a preconceived solution rather than truly understanding the problem

User Research and Empathy

  • Inadequate or biased user research leading to misaligned solutions
  • Failure to involve users throughout the design process
  • Neglecting to validate user needs and pain points before ideation
  • Ignoring or misinterpreting stakeholder feedback that contradicts initial assumptions

Prototyping and Testing

  • Jumping to high-fidelity prototypes too quickly, bypassing low-fidelity testing
  • Lack of clear metrics or criteria for evaluating prototypes
  • Insufficient iteration cycles or premature convergence on a solution
  • Failure to test with a diverse range of users or in realistic contexts

Design Communication

  • Poor visualization of concepts or inability to effectively communicate ideas
  • Lack of clear rationale for design decisions
  • Failure to create a compelling narrative around the design solution
  • Inadequate documentation of the design process for replication or learning

Ethical Considerations

  • Failure to consider potential negative impacts or unintended consequences of the design
  • Lack of diversity and inclusion in the design team or user testing group
  • Insufficient attention to accessibility and universal design principles
  • Neglecting to address data privacy and security concerns in digital innovations

Evaluation and Validation

  • Lack of rigorous methods for measuring the impact or effectiveness of the design
  • Failure to establish clear success criteria at the outset of the project
  • Over-reliance on subjective feedback without quantitative data
  • Inadequate comparison against existing solutions or benchmarks

Interdisciplinary Integration

  • Failure to effectively integrate insights from relevant disciplines (e.g., psychology, engineering, anthropology)
  • Lack of systems thinking in addressing complex design challenges
  • Insufficient consideration of cultural factors in global or cross-cultural design projects.

What Are Some Common Problems with Mixed Methods?

Epistemological Conflicts

Quantitative and qualitative researchers often operate from different epistemological positions, leading to conflicts in research design and interpretation. Positivist approaches, typically associated with quantitative methods, emphasize objective measurement and generalizability. In contrast, interpretivist or post-positivist approaches, often linked with qualitative methods, focus on context-dependent, subjective understanding. These fundamental differences in views on the nature of knowledge can create challenges in integrating findings from both paradigms.

Authority and Validity Issues

Determining which data set—quantitative or qualitative—should take precedence when results conflict can be problematic. The perceived authority of different types of data varies across disciplines and audiences, potentially leading to biased interpretations or receptions of the research. This issue is particularly pronounced in fields that traditionally prioritize one methodological approach over the other.

Integration Challenges

Researchers often struggle to integrate quantitative and qualitative findings meaningfully. The risk of treating the two strands as separate studies rather than a cohesive whole is common. Successful integration requires a coherent framework that allows for the complementary strengths of both methods to inform the research questions and conclusions effectively.

Sampling Issues

Ensuring appropriate and compatible sampling for both quantitative and qualitative components can be challenging. Quantitative research typically aims for representativeness and generalizability, requiring random or stratified sampling methods. In contrast, qualitative research often focuses on depth and richness, using purposive or theoretical sampling to gather detailed insights from specific contexts. Balancing these differing sampling needs to achieve a cohesive mixed methods study can be difficult.

Data Transformation Problems

Transforming qualitative data into quantitative forms (quantitizing) or vice versa (qualitizing) can lead to the loss of nuance or misinterpretation. This process requires careful consideration to ensure that the richness of qualitative data is preserved and the precision of quantitative data is maintained, avoiding the pitfalls of oversimplification or distortion.
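
To see concretely what quantitizing does, and what it risks losing, here is a minimal sketch in Python. The participant IDs and theme codes are hypothetical; notice that once themes become tallies, the narrative context behind each code is gone:

```python
from collections import Counter

# Hypothetical coded interview data: each response has been tagged with
# analyst-assigned theme codes (the qualitative step happens upstream).
coded_responses = [
    {"id": "P1", "codes": ["trust", "usability"]},
    {"id": "P2", "codes": ["usability"]},
    {"id": "P3", "codes": ["trust", "workload"]},
]

# Quantitizing: collapse the codes into frequency counts.
counts = Counter(code for r in coded_responses for code in r["codes"])
for code, n in counts.most_common():
    print(f"{code}: mentioned by {n} of {len(coded_responses)} participants")
```

The counts are easy to report and compare, but they say nothing about why a participant raised “trust,” which is precisely the nuance the paragraph above warns can be lost.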