AI-Based Insights from Worker Conversations: The Ethics of Using vs. Not Using

“Like a race car, AI is a powerful new technology that’s capable of so much, yet it has to be used responsibly. Just like you wouldn’t drive a new race car without brakes, dashboards, etc., we will need standards, guardrails, and to monitor AI tools continuously. It’s not just about speed and performance. We have to build trust and operate AI safely and responsibly.” - Guru Sethupathy

For business and HR leaders, the core value proposition of artificial intelligence (AI) is that it can perform tasks more efficiently and effectively than the humans or machines that preceded it. Automating such work frees capacity for what is, at least in theory, higher-value work, often work that only humans can do.

AI-based tools can also take on work that has never even been attempted, work that may have been deemed too time-consuming, expensive, or laborious, or that simply wasn’t thought possible.

Here’s where we are with Employee Listening, a term I use, frankly, with great caution. On the one hand, we all like to be heard when we are consciously communicating. On the other hand, none of us, I presume, likes to be eavesdropped on unknowingly. Most would agree that doing so in a cafe, restaurant, or other social setting would be highly improper, and it follows that eavesdropping at scale would be improper too. That said, if leaders openly state that conversational tone and frequency will be measured on an ongoing basis, then there must be a compelling reason for doing so, one that, at the very least, will not compromise trust and that, in the best case, will elevate it.
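As a small illustration of what that openness could mean in practice, here is a minimal sketch of a consent gate that admits only messages from employees who have explicitly opted in. The consent registry, employee IDs, and message format are hypothetical assumptions for illustration, not any particular vendor’s design.

```python
# A minimal sketch of consent-gated analysis: only opted-in employees'
# messages ever reach the sentiment pipeline. All names here are hypothetical.
opted_in = {"emp_102", "emp_315"}  # would be populated from a real consent workflow

messages = [
    {"author": "emp_102", "text": "Loving the new flexible schedule."},
    {"author": "emp_777", "text": "This week has been rough."},
]

# Drop anything from non-consenting authors before analysis ever sees it,
# and strip the author ID so downstream processing stays anonymous.
analyzable = [{"text": m["text"]} for m in messages if m["author"] in opted_in]
print(f"{len(analyzable)} of {len(messages)} messages eligible for analysis")
```

Filtering and anonymizing before analysis, rather than after, is one way to make the stated measurement policy structurally enforceable rather than a matter of downstream discipline.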

Here’s where it gets tricky, though, particularly for those who are older (define that for yourself) and who have long-settled expectations of privacy and a generationally rooted code of ethics. Those who’ve grown up in the digital age, however, have different parameters around privacy and ethics.

Millennials (born between 1981 and 1996) and Gen Z (born between 1997 and 2012) have grown up in the digital age. They’ve been both data generators and data consumers from the start of their adult lives and, in the case of Gen Z, long before. As such, research has shown that millennials and Gen Z are more comfortable sharing personal information online than their older counterparts. A study conducted by the Pew Research Center found that 56% of millennials and 48% of Gen Z believe that sharing personal information online is "just part of life". Gen X (born between 1965 and 1980) and Baby Boomers (born between 1946 and 1964) feel differently. Only 40% of Gen X and only 30% of Baby Boomers felt that sharing personal information online is “just part of life”.

But what about the workplace and, specifically, “listening” to conversations? This is where I believe the widely used phrase “Employee Listening” is a misnomer. Sentiment Analysis is more accurate, though for many it lacks gravitas and actionability. Even so, conversations are where real life happens. Conversations promote feelings of safety, empowerment, inclusion, and belonging, and they can also compromise those things. If insight into the sentiment of these conversations can be gained, is there not a responsibility to gain it? Through insight-driven improvements to communication, policy, and process, positive, beneficial sentiments can be perpetuated and even propagated, and negative, high-risk sentiments can be addressed before lagging indicators (voluntary turnover, disengagement, and the like) reveal their presence.
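To make this concrete, here is a minimal sketch of what team-level sentiment aggregation could look like. It assumes the open-source Hugging Face transformers library; the messages, team labels, and prior anonymization step are hypothetical, and nothing here represents any particular product’s method.

```python
# A minimal sketch of conversation sentiment analysis aggregated to the team
# level. Assumes the `transformers` library; data is hypothetical.
from collections import defaultdict
from statistics import mean

from transformers import pipeline

# Off-the-shelf sentiment classifier (a DistilBERT-based model by default).
classifier = pipeline("sentiment-analysis")

# Hypothetical, already-anonymized message snippets tagged only by team.
messages = [
    {"team": "support", "text": "I feel heard in our standups lately."},
    {"team": "support", "text": "Nobody ever acts on what we raise."},
    {"team": "sales", "text": "Great collaboration on the Q3 push."},
]

# Aggregate to the team level so no individual is singled out.
scores = defaultdict(list)
for msg in messages:
    result = classifier(msg["text"])[0]  # e.g. {"label": "NEGATIVE", "score": 0.98}
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    scores[msg["team"]].append(signed)

for team, vals in scores.items():
    print(f"{team}: mean sentiment {mean(vals):+.2f} across {len(vals)} messages")
```

Reporting only team-level aggregates, never per-person scores, is one design choice that pursues the insight while avoiding individual surveillance.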

Of course, at this point in time there’s no definitive right or wrong on whether to utilize Sentiment Analysis (if I may transition to that phrase). If it’s done, however, it will require an openness to learn, explore, take appropriate action, and communicate… and then communicate, communicate, and communicate some more.

But why bother? “We haven’t done this before.” “Why do we need to do it now?” “It’s too risky.” “We’re not there yet.” I’ve heard these and similar objections for years, and I’m still hearing them in the wake of ChatGPT’s launch. Since then, Microsoft has announced Microsoft 365 Copilot and Google has, just days ago, launched its enhanced AI chatbot. As William Gibson famously observed, the future is already here; it’s just not evenly distributed. With this in mind, organizations that don’t utilize sentiment analysis will, in my view, be at a competitive disadvantage, and they’ll be at that disadvantage immediately, not a year or two down the road.

How does all this relate to organizational culture, the employee experience, safety, and diversity, equity, and inclusion? 

There’ll be more to come on answering this very big question. For now, we all have to recognize that there’s a massive value proposition in understanding the employee experience at work more accurately, quickly, and frequently. Again, doing so offers leading indicators of safety, inclusion, wellbeing, engagement, productivity, and other important constructs.

Imagine you have the ability to know whether bullying is going on within the organization you’re responsible for, and you choose, as the leader, not to mitigate that risk. “Al (my name), this just doesn’t happen in our organization.” If that’s your internal narrative, I simply ask, “How do you know?” Are you trusting your employee surveys, which, as valuable as they are (and, yes, they are valuable and will remain so), have shortcomings? Most survey-based insights are lagging indicators of past experiences. Even forward-projecting survey items, like feelings of confidence, carry personal biases by their very nature: personal worldviews and feelings that are continually shaped by external factors, including other people.

What would be better? An always-on dashboard, along with prompts, that provides real-time sentiment across an organization. This would enable leaders to respond proactively and accurately, and it would help inspire meaningful interventions that elevate the constructs mentioned before: safety, inclusion, wellbeing, engagement… even trust. Without such a system, no such interventions can be made, at least not with the same confidence. Leaders would largely be guessing, or at least (potentially) over-relying on lagging indicators or hearsay.
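For illustration, here is a minimal sketch of the kind of alerting logic such a dashboard might run, assuming daily team-level sentiment scores on a roughly -1 to +1 scale. The window sizes, threshold, and data are assumptions chosen for readability, not a recommended configuration.

```python
# A minimal sketch of "always-on" alerting: compare a short rolling window of
# team sentiment against a longer baseline and flag sustained drops.
from statistics import mean

def flag_sentiment_drop(daily_scores, window=7, baseline=30, drop=0.15):
    """Return True if the recent rolling mean falls well below the baseline mean."""
    if len(daily_scores) < baseline + window:
        return False  # not enough history to judge
    recent = mean(daily_scores[-window:])
    prior = mean(daily_scores[-(baseline + window):-window])
    return (prior - recent) >= drop

# Hypothetical daily mean sentiment for one team.
history = [0.30] * 30 + [0.28, 0.15, 0.10, 0.05, 0.02, -0.05, -0.10]
if flag_sentiment_drop(history):
    print("Prompt a leader check-in: sustained sentiment drop detected.")
```

Comparing a recent window against a longer baseline, rather than reacting to any single day, is what turns raw sentiment into the kind of leading indicator a leader can act on without chasing noise.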

As I hope is coming across here, I feel we need to move away from the legacy mindset that leans towards “No way. Doing this isn’t right. It isn’t ethical.” and towards a newer, more open mindset that leans towards “Damn. Not doing this isn’t right. The ethical thing is to get this insight and act on it. In fact, it’s the responsible thing to do, both for our employees and the organization.”

What needs to happen for this to be done right? Satya Nadella, CEO of Microsoft, shares, "AI can help organizations create a more inclusive and equitable workplace by identifying patterns of bias and discrimination. However, we must ensure that AI is used responsibly and transparently, with a focus on the employee experience." Alejandro Martinez, CEO of Erudit.ai, elaborates on how this can be done: "It’s important to involve employees in the development and implementation of AI tools to ensure their trust and buy-in." How to involve employees in the development, deployment, and ongoing use of AI tools will be the subject of future articles and discussions. Follow me and click the bell here on LinkedIn to be notified of the next article on this very important topic, a topic that’s here to stay.

