Using this system of organizing information, they generate responses to prompts, building them word by word into sentences, paragraphs and even long documents simply by predicting the next most appropriate word.
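To make that concrete, here is a minimal Python sketch of the next-word loop, using a tiny invented probability table in place of a real model; the vocabulary and probabilities are assumptions made purely for illustration:

```python
import random

# Toy next-word probabilities. A real LLM derives these from billions of
# parameters; the table below is invented purely for illustration.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "law": 0.3, "model": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "law": {"requires": 0.6, "applies": 0.4},
    "model": {"predicts": 0.8, "fails": 0.2},
}

def generate(prompt_word: str, length: int = 3) -> str:
    """Build a sentence word by word, sampling the next word from its probabilities."""
    words = [prompt_word]
    for _ in range(length):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
```

A real LLM does the same thing, except that the probabilities come from a neural network with billions of parameters rather than a hand-written table.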
We used to think there had to be a limit to how much LLMs could improve, a point beyond which the benefits of increasing the size of a neural network would be marginal at best. What we discovered instead was a power-law relationship between the number of parameters in a neural network and its performance.
The larger the model, the better it performs across a wide range of tasks, often to the point of surpassing smaller, specialized models even in domains it was not specifically trained for. This is what is referred to as the scaling law, thanks to which artificial intelligence (AI) systems have been able to generate extraordinary outputs that, in many instances, far exceed the capacity of human researchers.
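For a sense of what a power law means in practice, here is a small illustrative calculation; the constant and exponent below are invented for the example, not values fitted from any published scaling-law study:

```python
# Illustrative scaling law: loss falls as a power of parameter count, loss = C * N**(-alpha).
# C and ALPHA here are invented for illustration, not fitted to real training runs.
C, ALPHA = 16.0, 0.08

for n_params in (1e8, 1e9, 1e10, 1e11):
    loss = C * n_params ** (-ALPHA)
    print(f"{n_params:.0e} parameters -> predicted loss {loss:.2f}")
```

Each tenfold increase in the number of parameters cuts the predicted loss by the same fixed fraction, which is why performance keeps improving rather than flattening out.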
But no matter how good AI is, it can never be perfect. It is, by definition, a probabilistic, non-deterministic system. As a result, its responses are not conclusive answers but simply the most statistically likely ones. Moreover, no matter how much effort we put into reducing AI ‘hallucinations,’ we will never be able to eliminate them entirely. And I don’t think we should even try.
After all, the reason AI seems so magical is its fundamentally probabilistic approach to building connections in a neural network. The more we constrain its performance, the more we will forgo the benefits it currently delivers.
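One concrete way to see this trade-off is the ‘temperature’ setting used when sampling from a model. The sketch below uses an invented three-option distribution to show how turning the temperature down makes outputs predictable at the cost of variety:

```python
import random

# Invented next-token probabilities, for illustration only.
PROBS = {"likely_answer": 0.5, "creative_answer": 0.3, "unexpected_answer": 0.2}

def sample(probs, temperature):
    """Sample one option; lower temperature concentrates probability on the likeliest one."""
    if temperature <= 1e-6:
        # Zero temperature: always pick the single most likely option.
        return max(probs, key=probs.get)
    # Raising probabilities to the power 1/temperature sharpens or flattens the distribution.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs), weights=weights)[0]

for t in (1.0, 0.5, 0.0):
    picks = {sample(PROBS, t) for _ in range(1000)}
    print(f"temperature {t}: {len(picks)} distinct outputs seen -> {sorted(picks)}")
```

At a temperature of zero the system becomes effectively deterministic: no surprises, but also none of the unexpected connections that make it valuable.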
The trouble is that our legislative frameworks are not designed to deal with probabilistic systems like these. They are designed to be binary: to clearly demarcate zones of permissible action so that anyone who operates outside those zones can immediately be held liable for the transgression. This paradigm has served us well for centuries.
Much of our daily existence can be described in terms of a series of systematic actions: those that we perform in our factories or in the normal course of our commercial operations. When things are black or white, it is easy to define what is permissible and what is not. All that the person responsible for a given system needs to do to avoid being held liable is ensure that it only performs in a manner expressly permitted by law.
While this regulatory approach works for deterministic systems, it simply does not make sense for probabilistic ones. When it is not possible to determine how an AI system will react to the prompts it is given, how do we ensure that the system as a whole complies with the binary dictates of traditional legal frameworks?
As discussed above, this is a feature, not a bug. AI is so useful precisely because of these unconventional connections. The more AI developers are made to use post-training and system prompts to constrain the outputs their models generate, the more we shackle what AI has to offer us. If we want to maximize the benefits we can extract from AI, we will have to re-imagine the way we think about liability.
We first need to recognize that these systems can and will perform in ways that are contrary to existing laws. For one-off incidents, we need to give developers a pass—to ensure they are not punished for what is essentially a feature of the system. However, if the AI system consistently generates harmful outputs, we must notify the persons responsible for that system and give them the opportunity to alter the way the system performs.
If they fail to do so even after being notified, they should be held responsible for the consequences. This approach ensures that rather than being held liable for every transgression in the binary way that current law requires, they have some space to manoeuvre while still being obliged to rectify the system if it is fundamentally flawed.
While this is a radically different approach to liability, it is one that is better aligned with the probabilistic nature of AI systems. It balances the need to encourage innovation in the field of AI against the need to hold the persons responsible for these systems liable when systemic failings occur.
There is, however, one category of harms that might call for a different approach. AI systems make available previously inaccessible information and explain it in ways that ensure that even those unskilled in the art can understand it. This means that potentially dangerous information is more easily available to those who may want to misuse it.
These are referred to as the Chemical, Biological, Radiological and Nuclear (‘CBRN’) risks of AI. AI could make it much easier for persons with criminal intent to engineer deadly toxins, deploy biological weapons and initiate nuclear attacks.
If there is one category of risk that deserves a stricter liability approach, it is this. Happily, responsible AI developers are deeply cognizant of these risks and are actively working to guard against them.