Artificial Intelligence – Where exactly are the risks of AI/ML, and why should it NOT be fully regulated?

Our assumption that Artificial Intelligence models are perfect

We all assume that Artificial Intelligence (AI/ML) models are perfect, or at least tending towards perfection. Our belief is based on the fact that nothing else has ever been trained on billions of data points; hence, we reason, the chances of errors are low.

What don’t we see?

Bad people (business leaders, politicians, PR agencies) can force the developers (who create the model and then periodically refine it) to artificially change the model’s weights in many manipulative ways.

One way is to use more of the type of data the manipulator wants. For example, if the model is being created for credit analysis of customers, the input data can be manipulated by using only the ‘credit not repaid’ records of a specific group of people (based on religion, region, class, caste, or age group). Though the actual data might follow a regular normal distribution, this curated data set includes only those members of the group who cheated the banks in the past.

So the next time a person belonging to that group applies for a loan, the AI will warn the decision maker that this customer is VERY HIGH RISK, but it will not reveal the real reason. At most, it will give the reason ‘Based on available data…’.
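
To see the mechanism concretely, here is a minimal sketch (synthetic data and a made-up income feature, not taken from any real lender) of how such a curated training set biases a credit-risk model against one group:

```python
# A minimal sketch with synthetic data: both groups default at the same
# true rate, but the training set is curated as described above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
income = rng.normal(50, 10, n)             # arbitrary stand-in feature
defaulted = (rng.random(n) < 0.10).astype(int)  # ~10% default in BOTH groups
X = np.column_stack([income, group])

# Manipulation: for group B, keep ONLY the 'credit not repaid' records.
keep = (group == 0) | (defaulted == 1)
model = LogisticRegression(max_iter=1000).fit(X[keep], defaulted[keep])

# Two new applicants, identical incomes, different groups:
applicants = np.array([[50.0, 0], [50.0, 1]])
print(model.predict_proba(applicants)[:, 1])
# Group B's predicted default risk comes out far higher, purely
# because of how the training data was selected.
```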

These kinds of risks do exist.

Why do you say this is NOT a huge risk?

If the original model source code, and the source and data of every subsequent refinement, are open-sourced, then any partiality or favoritism can be caught.
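
As a sketch of what such an audit could look like: once the model and data are public, anyone can re-run the model on a balanced reference set and compare outcomes across groups. The disparate-impact ratio below is one common fairness check among many, and the numbers are made up for illustration.

```python
# A sketch of an outside audit, possible only because the model's code,
# data, and predictions can be reproduced by anyone.
import numpy as np

def disparate_impact(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of loan-approval rates: group 1 versus group 0.

    A value far below 1.0 suggests the model disfavours group 1.
    """
    return approved[group == 1].mean() / approved[group == 0].mean()

# Hypothetical audit run: re-score a balanced reference set of applicants.
approved = np.array([1, 1, 1, 0, 1, 1, 0, 0, 0, 0])  # model's decisions
groups   = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # applicant group
print(disparate_impact(approved, groups))  # 0.25 -> worth investigating
```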

Why should Artificial Intelligence not be regulated?

It is because regulation will allow a small subset (very rich and influential companies) to create very powerful models while stopping regular people from creating powerful models. This is a bigger risk, because the rich and influential can then use those models to harm regular people very easily. It leads to the same old issue: the rich get richer and the poor get poorer. In this context it means the rich (who escape regulation in practice) keep building stronger models, whereas regular people (who are stopped and regulated) are forced to build weaker ones. This is the actual risk.

If there is no regulation, any harmful model can be countered with an equal and opposite beneficial model.

What is the solution?

Any model that is used for public purposes, such as policy decision-making or deepfake-capable media generation, should be open-sourced.

How to tackle AI deepfakes?

For such known, specific issues, we can ask model developers to add watermarks and similar markers so that we can tell what is real and what is fake.
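
As a toy illustration (not a production scheme), a watermark can be as simple as a tag embedded in an image’s least-significant bits. The tag name below is made up, and a real deployment would need something that survives compression and cropping, such as spread-spectrum watermarks or signed provenance metadata:

```python
# A toy least-significant-bit watermark, demonstrating the idea of a
# machine-readable 'this is generated' mark.
import numpy as np

TAG = b"GENERATED-BY-MODEL-X"   # hypothetical marker, not a standard

def embed(image: np.ndarray) -> np.ndarray:
    """Hide TAG in the least-significant bits of the first pixels."""
    bits = np.unpackbits(np.frombuffer(TAG, dtype=np.uint8))
    out = image.copy().ravel()
    out[: bits.size] = (out[: bits.size] & 0xFE) | bits
    return out.reshape(image.shape)

def extract(image: np.ndarray, n_bytes: int = len(TAG)) -> bytes:
    """Read the hidden tag back out of a (lossless) image."""
    bits = image.ravel()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(extract(embed(img)))   # b'GENERATED-BY-MODEL-X'
```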

Why even develop models?

Models have the potential to create a society that is fairer and more just than anything in the past, because decisions can be made based on actual, real data. We only need to ensure that bad people cannot do harm, by forcing everyone to open-source their models: not personal or company-internal models, only those that will be used in public. The moment they are open-sourced, the risks are effectively zero.

Will there be big issues every now and then?

Initially, there can be issues like the deepfake menace we see now. But with every iteration, we can make the whole ecosystem better and reduce the risks drastically.

Who are the real potential bad people?

The very rich – because they have access to money that can get them both training data and resources (the most intelligent workforce, the most powerful hardware). Hence, what they create will be very powerful and can be used equally for good and for bad. (Once they open-source it, the risk becomes effectively zero, because we can see whether they misuse their models.) Regular people can also become bad actors by using very specific models to fool other people – hence open source is needed for them as well.

If everything is open-sourced, then where is the incentive?

There should be an easy mechanism to copyright models so that if someone else uses the code to replicate a model, it can be detected and stopped. Since the replicated code will also be open-sourced, we can automatically find such unauthorized usage, as in the sketch below.
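
A minimal sketch of such automatic detection, assuming a public registry of weight-file fingerprints (the registry, weights, and author names below are made up): hashing published weights catches verbatim copies, while fine-tuned replicas would need fuzzier similarity checks.

```python
# Detecting exact reuse of an open-sourced model via content hashes.
import hashlib

def fingerprint(weights: bytes) -> str:
    """SHA-256 fingerprint of a model's published weight bytes."""
    return hashlib.sha256(weights).hexdigest()

# Stand-ins for published weight files (in practice, read from disk).
original_weights = bytes(range(256)) * 100
registry = {fingerprint(original_weights): "original-author"}

suspect_weights = bytes(range(256)) * 100   # an exact, unauthorized copy
match = registry.get(fingerprint(suspect_weights))
if match:
    print("Replica of a model copyrighted by:", match)
```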
