Why algorithmic transparency needs a protocol

As algorithmic operations grow more complex, we can rely less and less on the methods of the past, where a Privacy Policy or Terms and Conditions served (did they?) to build trust in a business. They rarely helped any user understand what happens to their data under the hood. "I agree, I understand, I accept" — the big lies we told ourselves when clicking a website's cookie notice or ticking the checkbox on yet another digital platform. In the age of artificial intelligence, the privacy and cybersecurity risks remain, and we are now watching the risk profile of every service expand to include bias and discrimination.

What should we do? The typical answer is top-down regulation from national and cross-national bodies. Countries and economic unions are now competing over AI ethics guidelines and standards. Good. But what if you're building an international business? As a business, you have to comply. Tons of digital paperwork (thanks, at least it's digital now!) — and you get settled in one single economic space. Once you're there, there's a chance you can move into another one by repeating the costly bureaucratic procedure. Unfortunately, this does not scale. We call it the "cost of compliance", and these costs are high.

There is a possible way around the compliance scalability issue: disclose your modus operandi once, then match it against the existing requirements of each market. To make that possible, we need a universally accepted concept of product disclosure.
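The "disclose once, match per market" idea can be sketched as a simple matching procedure. Everything below is a hypothetical illustration — the disclosure fields and the per-market rules are invented for this example, not taken from any real standard:

```python
# Hypothetical sketch of "disclose once, match per market".
# Field names and market rules are illustrative, not a real standard.

# A single machine-readable disclosure describing how a product
# handles data and algorithmic decisions.
disclosure = {
    "data_collected": ["email", "usage_metrics"],
    "automated_decisions": True,
    "human_review_available": True,
    "data_retention_days": 365,
}

# Per-market requirement sets, expressed as predicates over the disclosure.
market_requirements = {
    "market_a": {
        "human_review": lambda d: d["human_review_available"],
        "retention_limit": lambda d: d["data_retention_days"] <= 730,
    },
    "market_b": {
        "retention_limit": lambda d: d["data_retention_days"] <= 180,
    },
}

def check_compliance(disclosure, requirements):
    """Return the names of requirements the disclosure fails to meet."""
    return [name for name, rule in requirements.items() if not rule(disclosure)]

# One disclosure, matched against every market's rules.
for market, rules in market_requirements.items():
    gaps = check_compliance(disclosure, rules)
    print(f"{market}: {'compliant' if not gaps else f'gaps: {gaps}'}")
```

The point of the sketch: the costly part (describing the product's modus operandi) happens once; adding a new market only means adding a new rule set, not repeating the whole disclosure exercise.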

Read the full article on Medium to learn more about disclosure and the transparency protocol meant to be used alongside it.
