
The Pitfall of AI-Driven Organizational Change: How Data Dependence Creates “Irreversible HR”

The Management Challenge of “Employee Satisfaction” and AI as the Solution

Approximately 90% of companies with 50 or more employees consider “improving employee satisfaction” a crucial management issue. This survey result reflects the reality that many executives are grappling with talent retention and vitality. Meanwhile, advertising giant Dentsu has proposed a new “solution” to this challenge. In collaboration with Dentsu Institute, they have begun offering an “HR×AI Organizational Transformation Program” that uses Microsoft 365 usage data as a starting point.

Number of emails sent/received, meeting times, chat frequency, file-sharing networks… AI analyzes the digital data generated from daily work to “visualize” the actual state of organizational communication and potential issues. At first glance, no form of HR management seems more rational and objective. Decisions based on data appear far more “reversible” than traditional personnel evaluations reliant on a manager’s intuition or a few voices.

However, a major pitfall lurks here. Data tells us “what is happening,” but not “why it is happening.” And the moment this “why” is lost, management decisions, cloaked in the guise of “data-backed correctness,” begin to lose their reversibility.

When Data Leads to “Outsourcing Judgment”

Dentsu’s program promises to visualize the “invisible reality” of an organization by combining HR and AI. It certainly can be a powerful tool for discovering “structural problems” like communication breakdowns between departments or work overload on specific individuals.

The problem lies in the “next step.” What actions should be taken regarding the “trends” or “issues” the data indicates? The mistake many organizations make here is accepting the results of data analysis as the direct “answer,” leading straight to “hard-to-reverse decisions” like personnel transfers or system changes.

For example, suppose AI analyzes that “communication between Department A and Department B is extremely low.” From a perspective of reversible management, this is merely a “hypothesis.” To preserve reversibility, one must first take the seemingly inefficient step of asking “people,” not data, “why is it low?” Is it simply unnecessary due to no overlapping work, or is personal conflict creating a barrier? The appropriate action differs entirely based on the cause.

If data cannot be elevated from a “basis for judgment” to a “starting point for inquiry,” AI analysis becomes merely a device for “outsourcing” the manager’s own observation and consideration. Outsourced judgments, even if their results are undesirable, become difficult to correct due to the “objective evidence” of data and become fixed within the organization.

When the Yardstick for “Satisfaction” Distorts Reality

Another danger is that relying only on measurable data narrows the definition of “employee satisfaction.” What Microsoft 365 can measure is merely the “digital footprint of activity.” The creative idea born from a conversation during a walk, the trust between colleagues who look out for each other’s family circumstances, the shared sense of accomplishment after overcoming a difficult project: these “analog, immeasurable elements” that form the core of an organization’s health and vitality sink into the sea of data and become invisible.

And when management’s eyes start chasing only “satisfaction metrics measurable by data,” the workplace begins to “game” the system to improve those metrics: inflating chat counts, scheduling meaningless meetings. A paradox arises in which data distorts people’s behavior, pushing them further from the reality that needs improvement. This is the start of an “irreversible vicious cycle” that, once set in motion, is hard to escape.

How to Use “HR×AI” to Preserve Reversibility

So, how can we use such AI tools not as weapons that solidify judgments, but as probes that enhance reversibility? The key lies in strictly designing the “evaluation period” and “observation points” for tool implementation beforehand.

First Principle: Define Data as a “Hypothesis Generator”

When introducing the program, form a clear agreement between management and HR: “The analysis results output by this AI are not ‘facts’ but ‘hypotheses to be investigated.'” Anomalies or trends indicated by data must be a “trigger” for humans to directly investigate the cause. The moment this principle breaks, dependence on data begins.

Second Principle: Prioritize a “List of Questions” Over a “Dashboard”

Tools tend to present polished dashboards and progress toward numerical targets. However, the output reversible management should seek is not a single number like “This month’s employee engagement score is 75 points.” It is a concrete “list of questions,” such as: “Late-night email volume in Department X increased 50% month-on-month. What does this mean?” or “File sharing among Project Y stakeholders is extremely low. What are the barriers to information sharing?”

Based on this list, managers engage in dialogue with their teams. This dialogue itself reveals realities invisible to data alone and provides material for designing the next action as an “experiment.”
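The idea of outputting questions rather than a single score could be sketched in a few lines of code. The following is a minimal illustration only, not part of Dentsu’s program: the metric names, department labels, and the 30% anomaly threshold are all hypothetical assumptions.

```python
# Minimal sketch: turn month-on-month metric changes into a
# "list of questions" for human dialogue, not a verdict or score.
# Metric names, departments, and the 30% threshold are hypothetical.

def generate_questions(prev, curr, threshold=0.3):
    """Compare last month's metrics with this month's and return
    open questions whenever a change exceeds the threshold."""
    questions = []
    for dept, metrics in curr.items():
        for name, value in metrics.items():
            before = prev.get(dept, {}).get(name)
            if not before:
                continue  # no baseline -> nothing to compare against
            change = (value - before) / before
            if abs(change) >= threshold:
                direction = "rose" if change > 0 else "fell"
                questions.append(
                    f"{name} in {dept} {direction} "
                    f"{abs(change):.0%} month-on-month. What does this mean?"
                )
    return questions

prev = {"Department X": {"late-night emails": 100, "chat messages": 400}}
curr = {"Department X": {"late-night emails": 150, "chat messages": 410}}

for q in generate_questions(prev, curr):
    print(q)
# → late-night emails in Department X rose 50% month-on-month. What does this mean?
```

Note that the function deliberately returns sentences ending in a question, never a recommendation: the answer is left to the dialogue between managers and their teams.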

Third Principle: Execute Actions as “Time-Limited Experiments”

Suppose data and dialogue reveal “insufficient communication” as an issue. Here, one must not immediately introduce large-scale reorganization or permanent new systems. First, design a small, clearly time-limited “experiment” whose effects can be verified upon completion, such as “Establish a 30-minute information exchange every Friday for 3 months.”

The success or failure of this experiment is evaluated from both data again (e.g., increase rate in post-meeting chats) and the raw voices of participants. If ineffective, simply end it and try another hypothesis. This very “loop of trial and error” enhances the organization’s learning capability and avoids the “irreversible decisions” born from rigid HR systems.
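The dual evaluation described above, quantitative data plus participants’ raw voices, could be sketched as follows. This is a hedged illustration under assumed inputs: the chat-count metric, the 20% uplift threshold, and the feedback format are hypothetical, not prescribed by the program.

```python
# Minimal sketch: evaluate a time-limited experiment from both
# quantitative data and participant voices. The metric, the 20%
# uplift threshold, and the feedback schema are hypothetical.

def evaluate_experiment(baseline_chats, trial_chats, feedback,
                        min_uplift=0.2):
    """Return a recommendation, not a final verdict: both numbers
    and raw voices feed a human decision to continue or end."""
    uplift = (trial_chats - baseline_chats) / baseline_chats
    positive = sum(1 for f in feedback if f["helpful"])
    support = positive / len(feedback) if feedback else 0.0
    keep = uplift >= min_uplift and support >= 0.5
    return {
        "uplift": uplift,
        "support": support,
        "recommendation": "continue" if keep
                          else "end and try another hypothesis",
    }

# Example: post-meeting chats rose from 200 to 260 over the
# 3-month trial, and two of three participants found it helpful.
result = evaluate_experiment(
    baseline_chats=200,
    trial_chats=260,
    feedback=[{"helpful": True}, {"helpful": True}, {"helpful": False}],
)
print(result["recommendation"])
# → continue
```

The key design choice is that the “end” branch carries no penalty: discarding a failed hypothesis is the normal, cheap outcome of the loop, which is exactly what keeps the decision reversible.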

Using Data to Avoid Reducing Problems to “People Problems”

The greatest value of AI analysis tools like Dentsu’s program lies in their power to direct attention to “work structure and design” before attributing problems to “individual ability or personality.” For example, when data shows work concentrated on a specific individual, it becomes a trigger to question the fundamental work process—”Why isn’t the work designed to be distributed?”—rather than hastily concluding “that person is insufficient.”

This can be a powerful guideline for practicing the fundamental principle our media advocates: “Look at the work, not the people.” Data points to “flaws in work structure” while setting aside emotion and speculation.

However, the moment that guideline is mistaken for the “answer,” the tool strips away human judgment and creates new rigidity. What management should do is not delegate judgment to AI. It is to use the “new perspective” AI provides as material to deepen more human consideration, repeat small, highly reversible experiments, and thereby improve the “quality of judgment.”

Conclusion: The “Dialogue” Between AI and Humans Creates Reversible HR

Improving employee satisfaction is not a “metric” to be managed by data, but a “result” cultivated through daily dialogue and trial and error. Advanced tools like Dentsu’s “HR×AI” program should be positioned as “surprise-giving partners” to enhance the quality of that dialogue.

During the tool’s evaluation period, observe whether its output is heading towards “fixed answers” or generating “active questions.” When data halts human consideration, that is the first step towards “irreversible HR.”

Truly reversible management is not about using technology’s power to make judgment easier. It is about utilizing technology to create “room for judgment”—to formulate more hypotheses, experiment more safely, and learn faster.
