The Limits Of AI-Generated Models

In several recent stories, the subject of models has come up, and one recurrent theme is that AI may be able to help us generate models at a required level of abstraction. While this may be true in some cases, it is very dangerous in others.

If we generalize, AI should be good for any model where the results are predominantly continuous, but discontinuities create problems. Unless those discontinuities are found and their boundary conditions identified in some manner, it is possible to get results that reflect the inadequacy of the training. This is similar to image recognition today, where a very small change in the inputs can yield results that leave you puzzled at best.
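As a rough sketch of the effect, the Python fragment below fits a smooth polynomial surrogate to a hypothetical response that contains a step. The function, the polynomial degree, and the sample counts are all made up for illustration; the point is only that the surrogate can look fine away from the step and be badly wrong right at it.

```python
import numpy as np

# Hypothetical response with a step discontinuity at x = 0.
def true_response(x):
    return np.where(x < 0.0, 0.0, 1.0) + 0.05 * x

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, 200)
y_train = true_response(x_train)

# A smooth polynomial surrogate stands in for a learned continuous model.
coeffs = np.polyfit(x_train, y_train, deg=7)
surrogate = np.poly1d(coeffs)

x_far = np.linspace(-1.0, -0.5, 100)    # well away from the discontinuity
x_near = np.linspace(-0.02, 0.02, 100)  # straddling the discontinuity

err_far = np.max(np.abs(surrogate(x_far) - true_response(x_far)))
err_near = np.max(np.abs(surrogate(x_near) - true_response(x_near)))
print(f"worst-case error away from the step: {err_far:.3f}")
print(f"worst-case error at the step:        {err_near:.3f}")
```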

The takeaway from this always seems to be more data points, more complex networks, and more computational power, all of which reduce the advantage of the generated model. Does this accuracy problem ever go away or become acceptable? That depends upon how much is resting on the results. In some cases you may not care, but in others you do.

To me, the biggest problem is that AI is not humble. It doesn’t like saying, “I don’t know.”

A similar issue was raised by Serge Leef when he was working at DARPA and wanted to see if there was a way to make simulation run faster. One question he asked me was, “Is there a way to cache the results of a functional simulation, such that the next time you are in the same state with the same inputs, you can just look up the results rather than compute them?”

The answer is yes, but can you afford the memory? For the model to be complete, you need a memory location for every combination of state and inputs the design could see, with each word wide enough to hold the outputs and the next state. And then, yes, everything is simply a memory lookup. In most cases, you could probably collapse the memory down into a sparse matrix, but that adds to the complexity and slows down the operation, because now it is a multi-level memory lookup. In the extreme case, the necessary algorithm for maintaining the minimum memory is the functionality of the system, expressed in algorithmic form, i.e., the model.
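As a minimal sketch of the idea, assume a hypothetical step() function that computes the next state and outputs for a tiny design. The dictionary below plays the role of the sparse collapse of the full lookup memory:

```python
# Hypothetical step(): computes (next_state, outputs) for a tiny 4-bit design.
def step(state: int, inputs: int) -> tuple[int, int]:
    next_state = (state + inputs) & 0xF
    outputs = state ^ inputs
    return next_state, outputs

# The dict is the "sparse matrix" collapse of the full lookup memory:
# it only holds (state, inputs) pairs that have actually been visited.
cache: dict[tuple[int, int], tuple[int, int]] = {}

def cached_step(state: int, inputs: int) -> tuple[int, int]:
    key = (state, inputs)
    if key not in cache:            # miss: compute once and remember
        cache[key] = step(state, inputs)
    return cache[key]               # hit: a pure memory lookup

# A complete table would need one word per (state, inputs) combination,
# wide enough to hold the outputs and the next state, which is exactly
# what fails to scale.
```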

I used the memory lookup technique decades ago, implementing logic functions in ROMs, and the same approach appears in FPGA lookup tables. But it doesn’t scale, which is why lookup tables tend to stop at four or five inputs. It is also possible to implement state machines with ROMs, where the next address is encoded in the values held in the ROM. Again, this does not scale.
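In sketch form, with a hypothetical 4-input function and a toy two-state machine, those two ideas look like this:

```python
# A 4-input function f(a, b, c, d) = (a & b) | (c & d), precomputed into a
# 16-entry table: the same idea as a 4-input FPGA LUT.
LUT4 = [((i >> 3 & 1) & (i >> 2 & 1)) | ((i >> 1 & 1) & (i & 1))
        for i in range(16)]

def lut_lookup(a: int, b: int, c: int, d: int) -> int:
    return LUT4[(a << 3) | (b << 2) | (c << 1) | d]

# A toy Mealy machine stored in a ROM: the address is (state, input) and the
# word packs the next address (state) together with the output.
FSM_ROM = {
    (0, 0): (0, 0), (0, 1): (1, 0),
    (1, 0): (0, 0), (1, 1): (1, 1),  # output asserts on the second 1 in a row
}

def fsm_step(state: int, bit: int) -> tuple[int, int]:
    return FSM_ROM[(state, bit)]     # next state and output both come from the ROM
```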

The memory scaling problem is the same issue used to “size” the verification problem. It assumes that every memory bit contributes to a total state space of 2^n. There are several fallacies here. First, not every state is valid. Second, and more important, most systems contain multiple independent state spaces, where one cannot affect the other.
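A quick back-of-the-envelope calculation, with made-up bit counts, shows how much independence changes the size of the space:

```python
# Hypothetical bit counts for two blocks that never interact.
a_bits, b_bits = 20, 20

monolithic = 2 ** (a_bits + b_bits)       # sized as one combined state space
independent = 2 ** a_bits + 2 ** b_bits   # verified separately, if truly independent

print(f"as one space:  {monolithic:,} states")    # 1,099,511,627,776
print(f"as two spaces: {independent:,} states")   # 2,097,152
```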

This might lead you to think that AI could identify those independent regions, and that is where it would fall into the big trap. You never know for sure that they are independent. It only takes one case where that is not true to miss what could be the most important bug in your system. This is one of the biggest attractions of formal verification: it can consider all possible inputs over time before it comes to a conclusion. Formal, of course, is not scalable.
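For contrast, here is a minimal sketch of that exhaustive style of exploration, reusing the hypothetical step() signature from the earlier sketch: every input is tried from every reached state, so a single coupling case cannot slip through, but the cost is walking the entire space.

```python
from collections import deque

def reachable_states(step, reset_state, input_values):
    """Breadth-first reachability: every input is tried from every reached state."""
    seen = {reset_state}
    frontier = deque([reset_state])
    while frontier:
        state = frontier.popleft()
        for inputs in input_values:          # all possible inputs, every cycle
            next_state, _ = step(state, inputs)
            if next_state not in seen:
                seen.add(next_state)
                frontier.append(next_state)
    return seen

# Example with the hypothetical step() from the earlier sketch:
# reachable_states(step, reset_state=0, input_values=range(16))
```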

So I have concerns about AI being used to create functional models. But what about things like thermal models? That is where things look a lot more attractive. Discontinuities in functionality will not affect thermal behavior, which is an aggregation of the execution of lots of functionality over a long period of time. Everything gets averaged out, and the worst case is that your results are off by a percent or two.

It all comes down to the age-old question: what do I expect from a model? Only by answering that question does it become possible to select the right model and to know the risks associated with using it. What are the accuracy and fidelity of the model? They have to be weighed against the cost of executing it.

It is a rare case when you get something for nothing. We live in a world of tradeoffs, where we are constantly balancing multiple issues. With model generators it will become more complicated, because the time and cost of building or updating a model may not be obvious. We can again relate this back to functional verification, where we have simulators, emulators, and rapid prototyping systems. Each of them is an execution engine for a model. The simulator runs the most accurate model and has very fast turnaround after a change, but slow execution. The emulator executes faster, takes longer to be ready after a change has been made, and cannot accurately reflect some aspects of the design. Rapid prototyping takes even longer to be ready after a change, but has the fastest execution and an even larger number of areas where it departs from the actual design. You can add other dimensions to this, such as the controllability and observability of the models across the spectrum.

The bottom line is that AI is only suitable for generating models where fidelity is not an important issue. A small inaccuracy is tolerable, just like the inaccuracies that would be found in an abstract model. But you have to understand where it is possible for the model to produce significantly wrong results. And yes, abstract models can do this as well, so it is a matter of properly understanding what the qualities of the model are.
