
Overview
Having worked in both consulting and corporate environments for many years, and across multiple industries, I believe we're at an interesting juncture in how technology is leveraged at a macro level and in the broader business and societal impacts of those choices. Examples range from AI-generated content that could easily be mistaken for "real" events, to the use of data collection and advanced analytics in various business and consumer scenarios to create "competitive advantage".
The latter scenario has certainly been discussed for quite a while, whether in relation to managing privacy while using mobile devices, trusting a search engine to "anonymize" your data before potentially selling it to third parties, or questioning whether the results of a search (or GenAI request) are objective and unbiased, or presented with some level of influence reflecting the policies or leanings of the sponsoring organization.
The question to be explored here is: How do we define the ethical use of technology?
For the remainder of this article, I’ll suggest some ways to frame the answer in various dimensions, acknowledging that this isn’t a black-and-white issue and the specifics of a situation could make some of the considerations more or less relevant.
Considerations
What Ethical Use Isn’t
Before diving into what I believe could be helpful in framing an answer, I want to clarify what I don't consider a valid approach: the argument used in the majority of cases where individuals or organizations cross a line, namely, "We can use technology in this way because it gives us competitive advantage."
Competitive advantage tends to be an easy argument for doing something questionable, because there is a direct or indirect financial benefit associated with the decision, which clouds the underlying ethics of the decision itself. "We're making more money, increasing shareholder value, managing costs, increasing profitability, etc." are the kinds of claims that move the needle against the roadblocks that exist in organizations when mobilizing new ideas. The problem is that, for all the data collected in the interest of securing approval and funding for an initiative, I haven't seen many cases where there is a proverbial "box to check" confirming that the effort conforms to ethical standards (whether specific to technology use or otherwise).
What Ethical Use Could Be
That point having been stated, below are some questions that could be considered as part of an “ethical use policy”, understanding that not all may have equal weight in an evaluation process.
They are:
- Legal/Compliance/Privacy
- Does the ultimate solution conform to existing laws and regulations for your given industry?
- Is there any pending legislation related to the proposed use of technology that could create such a compliance issue?
- Is there any industry-specific legislation that would suggest a compliance issue if it were logically applied in a new way that relates to the proposed solution?
- Would the solution cause a compliance issue in another industry (now or in legislation that is pending)? Is there risk of that legislation being applied to your industry as well?
- Transparency
- Is there anything about the nature of the solution that, were it shared openly (e.g., through a press release or an industry conference/trade show), would cause customers, competitors, or partners/suppliers to raise issues with the organization's market conduct or end-user policies? This can be a tricky item given the earlier points on competitive advantage: what might be labeled a "trade secret" could nonetheless violate antitrust, privacy, or other market expectations.
- Does anything about the nature of the solution, were it to be shared openly, suggest that it could cause trust issues between customers, competitors, suppliers, or partners with the organization and, if so, why?
- Cultural
- Does the solution align with your organization's core values? For example, if there is a transparency concern (above) and "Integrity" is a core value (as it is in many organizations), why does that conflict exist?
- Does the solution conform to generally accepted practices or societal norms in terms of business conduct between you and your target audience (customers, vertical or horizontal partners, etc.)?
- Social Responsibility
- Does the solution create any potential issues from an environmental, societal, or safety standpoint that could have adverse impacts (direct or indirect)?
- Autonomy and Objectivity
- Does the solution provide an unbiased, fact-based (or analytically correct) outcome that can also be governed, audited, and verified? This is an important dimension given that our dependency on automation continues to increase, and we want to be able to trust the security, reliability, and accuracy of what that technology provides.
- Competitive
- If a competitor announced they were developing a solution of exactly the same nature as what is proposed, would that be a comfortable situation, or something you would challenge as an unethical or unfair business practice? Quite often, the lens through which unethical decisions are made is biased by an internal focus. If that line of sight were reversed and a competitor were open about doing exactly the same thing, would that be acceptable? If it would raise issues, there is likely cause for concern in developing the solution yourself.
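To make the evaluation concrete, the dimensions above could be operationalized as a lightweight, weighted checklist, since (as noted earlier) not all criteria carry equal weight. The sketch below is purely illustrative: the criterion names, weights, scoring scale, and passing threshold are hypothetical assumptions of mine, not a standard, and any real policy would need these agreed upon by the organization's compliance and leadership stakeholders.

```python
# Illustrative weighted checklist for an "ethical use of technology" review.
# Criterion names, weights, and the threshold are hypothetical examples.

CRITERIA = {
    "legal_compliance_privacy": 3,  # weighted highest: hard legal/privacy risk
    "transparency": 2,
    "cultural_alignment": 2,
    "social_responsibility": 2,
    "autonomy_objectivity": 2,
    "competitive_reversal": 1,      # "would we accept a competitor doing this?"
}

def evaluate(scores: dict[str, int], threshold: float = 0.75) -> tuple[float, bool]:
    """Each criterion is scored 0 (clear concern) to 2 (no concern).

    Returns the weighted score as a fraction of the maximum possible,
    plus a pass/fail flag against the threshold.
    """
    total = sum(CRITERIA[name] * score for name, score in scores.items())
    maximum = sum(weight * 2 for weight in CRITERIA.values())
    fraction = total / maximum
    return fraction, fraction >= threshold

# Example: a solution with no legal issues but a clear transparency concern.
example_scores = {
    "legal_compliance_privacy": 2,
    "transparency": 0,
    "cultural_alignment": 1,
    "social_responsibility": 2,
    "autonomy_objectivity": 2,
    "competitive_reversal": 1,
}
score, passed = evaluate(example_scores)
```

In this example, the transparency concern alone pulls the weighted score below the threshold, forcing a conversation before the initiative proceeds, which is exactly the "box to check" the funding process often lacks.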
Wrapping Up
From a process standpoint, one suggestion is to take the above list and discuss it openly, not only to determine the right criteria for your organization, but also to establish where these opportunities exist (because they do, and increasingly will as analytics and AI-focused capabilities advance). Ultimately, there should be a check-and-balance process for the ethical use of technology, in line with any broader compliance and privacy efforts that may already exist within an organization.
The "right thing to do" can be a murky and difficult question to answer, especially with ever-expanding tools and technologies that create capabilities a digital business can use to its advantage. But that's where culture and values should still apply, not simply because there is or isn't a compliance issue, but because reputations are made and reinforced over time through these kinds of decisions, and they either help build a brand or damage it when the right questions aren't explored at the right time.
It’s interesting to consider, as a final note, that most companies have an “acceptable use of IT” policy for employees, contractors, and so forth, in terms of setting guidelines for what they can or can’t do (e.g., accessing ‘prohibited’ websites / email accounts or using a streaming platform while at work), but not necessarily for technology directed outside the organization. As we enter a new age of AI-enabled capabilities, perhaps it’s a good time to look at both.
I hope the ideas were worth considering. Thanks for spending the time to read them. Feedback is welcome as always.
-CJG 10/25/2024