Technological Responsibility by Design

In this series, Cardinal Direction explores the most polarized questions in AI and qualitative research. Each week, we’ll unpack a tension shaping the future of our field — from questions of ethics and power dynamics to meaning and human connection.
These posts aren’t about taking sides; they’re about finding balance — and remembering that meaning, in all its complexity, still depends on us.

Technological Responsibility by Design:

Cultivating awareness and intention in how we use AI tools in qualitative research

In a recent conversation, a colleague introduced me to the work of Don Norman, specifically Emotional Design: Why We Love (or Hate) Everyday Things. I've been following the social discussions surrounding AI and its integration into our personal and professional lives. Like many contemporary topics, these conversations often become polarized, a framing I challenge because polarized discussions rarely lead to effective solutions.

With that in mind, I began exploring Norman’s ideas on emotional design more deeply. While reading his earlier work, I came across one paragraph that stood out to me:

The computer can be thought of from the perspective of its technology—from the field of computer science. Or it can be thought of as a social tool, a structure that will change social interaction and social policy, where the goals and intentions of the user become of primary concern. It can be viewed from the experience of the user, a view that changes considerably with the task, the person, the design of the system. The field of human-computer interaction needs all these views, all these issues, and more besides.

That got me curious. AI is not the first technological innovation to reshape our social dynamics. The radio, television, computers, and the internet all turned our human world upside down. So, what happens if I replace the word “computer” with “artificial intelligence”?

AI can be thought of from the perspective of its technology—from the field of computer science. Or it can be thought of as a social tool, a structure that will change social interaction and social policy, where the goals and intentions of the user become of primary concern.

For me, the same insights ring true: we can think of AI in terms of macro-level change (sociotechnological), but we can, and should, also think of it through the lens of symbolic interactionism, or the way meaning is created through our interactions with others… or in this case, with technology. And if we are repeating familiar patterns in our human relationship with technology, it makes sense to revisit the principles that inform emotional design: visceral, behavioral, and reflective.

Applying Emotional Design to AI in Qualitative Research

The following outline applies Emotional Design to AI in qualitative research to open a discussion that crosses industry and sector. Each section includes reflection questions meant to prompt AI users, qualitative researchers, and designers to consider how their visceral, behavioral, and reflective responses influence decisions about when, why, and how to use AI in their work. As Dr. Norman noted almost 25 years ago, this relationship is not to be siloed; rather, it belongs to “a pluralistic field” that requires perspectives from design, social science, and technology alike.

1. Visceral Level: First Impressions and Emotional Reaction

The visceral level of emotional design deals with instinctive, aesthetic, and sensory responses — how something feels upon first encounter. I explored this in an earlier blog post, How Do I Know Which Qualitative Data Analysis Software is Right for Me?

In qualitative research, visceral design matters because:

  • Researchers and participants must trust and feel comfortable with AI tools.

  • The interface, tone, and transparency of an AI tool shape perceptions of credibility and reliability.

Example:

Qualitative data analysis software might use clean visuals and human-like prompts that clearly indicate when the system is generating or summarizing content. In chatbot interactions, the methodical question-and-response exchange should also feel natural and efficient. Even subtle design choices, such as using color gradients or icons to represent emerging themes, can make the process of discovery feel intuitive and emotionally satisfying, evoking curiosity and relief instead of frustration. Personally, I have found that MAXQDA Tailwind appeals to my emotional response: while I know I’m technically interacting with a machine, the placement and tone of its AI tools feel intentional and unobtrusive, which is very different from the unsolicited, transactional feel of a customer service chatbot that pops up on a webpage.

How do your first emotional responses to an AI tool—its look, tone, or behavior—shape your sense of trust or discomfort in using it for research?

2. Behavioral Level: Usability and Function

The behavioral level focuses on usability, function, and how effectively a tool helps people achieve their goals. It's about trust through performance.

In qualitative research, behavioral design matters because:

  • Researchers need AI tools to work seamlessly within existing workflows, not disrupt them.

  • AI should enhance the researcher's interpretive skill, not overrule it.

Example:

When AI codes data, it should display how and why decisions were made, allowing researchers to trace its logic or adjust parameters. Outputs such as summaries or themes should align with familiar frameworks like grounded theory or thematic analysis, so researchers retain conceptual control. Features that allow for easy revision or annotation of AI-generated codes strengthen trust through performance and reinforce a sense of agency. A well-designed interface might even show source excerpts alongside generated themes—inviting validation and collaboration rather than passive acceptance. Here is a video demonstrating the usability and function of AI tools in ATLAS.ti: https://www.youtube.com/watch?v=QJVC1h2qzv4

 When you use AI in your research workflow, do you feel that the tool enhances your interpretive process—or that you’re adapting to its limitations? What design choices influence that experience?

3. Reflective Level: Meaning and Identity

The reflective level relates to personal values, cultural context, and the meanings users assign to tools and experiences. It's about how we feel about what we've done and how we've used the technology.

In qualitative research, reflective design is the most profound:

  • Researchers often identify deeply with their interpretive craft.

  • AI must respect that qualitative inquiry is humanistic, built on empathy, reflexivity, and meaning-making.

Example:

There is significant room for improvement in how QDAS platforms communicate their AI data privacy practices. To my knowledge, most platforms include a privacy statement on their websites, but these are often difficult to locate. In contrast, AI add-ons are prominently advertised on the front page. It would be helpful if AI tools included a brief data privacy summary directly in their descriptions, with a hyperlink to the full statement (see MAXQDA website, mid-page). Qualitative researchers, myself included, take the utmost care with the data we collect, because we work with participants who vulnerably share their lived experiences and personal meanings. High levels of transparency are therefore essential, not only from an ethical perspective but also to set cultural expectations around responsible AI use in research.

These statements also tend to focus on data storage or deletion. Given growing concerns about the environmental impact of AI, it would be appropriate for QDAS developers to acknowledge this issue as well. Transparency on both privacy and sustainability aligns with the values of qualitative research, which is grounded in empathy, reflexivity, and thoughtful engagement with meaning.

How does the availability of data privacy statements and ethical practices influence your decisions about when, why, and how to incorporate AI tools into your qualitative research?

Conclusion: Toward Technological Responsibility by Design

When we apply the principles of emotional design to AI in qualitative research, we shift the conversation from fear & fascination to responsibility & relationship. Emotional design reminds us that our choice to use technology is never neutral. It is shaped by how we think, feel, and interact.

This means that, as qualitative researchers, we have a hand in designing and using AI tools in ways that respect the humanistic foundations of inquiry: empathy, curiosity, and reflexivity. It also means resisting polarized narratives that label AI as either savior or threat and, instead, creating a discourse around the use of AI as a participant in a shared ecosystem of meaning-making, one that requires our continual attention, interpretation, and care.

Technological responsibility, then, is not about perfect control over innovation; it’s about cultivating awareness and intention in how we build and use these tools to balance logic and feeling, analysis and ethics, and human and machine.

Of course, there is always more to think about and discuss. If you are interested in going a bit further down the rabbit hole, here are a few resources that share different perspectives on the topic: 

Acknowledgements:

I am grateful to Trena Paulus and Jules Sherman for their thoughtful feedback and suggestions, which greatly improved the clarity and depth of this post.

To cite this post, please use the following citation:

Bower, K. L. (2025, October 20). Technological responsibility by design: Cultivating awareness and intention in how we use AI tools in qualitative research. Cardinal Direction Blog. www.followyourcardinal.com/blog/technological-responsibility-by-design

 
