Coleman Hughes recently interviewed Eliezer Yudkowsky, Gary Marcus and Scott Aaronson on the subject of AI risk.  This comment on the difficulty of spotting flaws in GPT-4 caught my eye:

GARY: Yeah, part of the problem with doing the science here is that — I think, you [Scott] would know better since you work part-time, or whatever, at OpenAI — but my sense is that a lot of the examples that get posted on Twitter, particularly by the likes of me and other critics, or other skeptics I should say, is that the system gets trained on those. Almost everything that people write about it, I think, is in the training set. So it’s hard to do the science when the system’s constantly being trained, especially in the RLHF side of things. And we don’t actually know what’s in GPT-4, so we don’t even know if there are regular expressions and, you know, simple rules or such things. So we can’t do the kind of science we used to be able to do.

This is a bit similar to the problem faced by economic forecasters.  They can analyze reams of data and make a recession call, or a prediction of high inflation.  But the Fed will be looking at their forecasts, and will try to prevent any bad outcomes.  Weather forecasters don’t face that problem.

Note that this “circularity problem” is different from the standard efficient markets critique of stock price forecasts.  According to the efficient markets hypothesis, a prediction that a given stock is likely to do very well because of (publicly known) X, Y or Z will be ineffective, as X, Y and Z are already incorporated into stock prices.  

In contrast, the circularity problem described above applies even if markets are not efficient.  Because nominal wages are sticky, labor markets are not efficient in the sense that financial markets are.  This means that if not for the Fed, it ought to be possible to predict movements in real output.

Before the Fed was created, it might have been possible to forecast the macroeconomy.  Thus an announcement of a gold discovery in California could have led to forecasts of faster RGDP growth in 1849.  There’s no “monetary offset” under the gold standard.  This suggests that moving to fiat money ought to make economic forecasting less reliable than under the gold standard.  Central bankers would begin trying to prove forecasters wrong.
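To see why offset destroys forecastability, here is a minimal toy simulation (my own illustrative sketch, not anything from the post; the model and all numbers are invented for the purpose).  Output is an observable demand shock plus noise.  Under the “gold standard” regime the shock passes straight through, so a forecaster who sees it can predict output; under the “fiat” regime the central bank sees the same public information and leans against it, leaving only the unpredictable residual.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A publicly observable demand shock (think: news of a gold discovery)
# plus noise that no forecaster can see in advance.
observable = rng.normal(size=n)
noise = rng.normal(scale=0.5, size=n)

# Gold standard: no monetary offset, the shock passes straight into output.
output_gold = observable + noise

# Fiat money: the central bank reads the same public forecasts and fully
# offsets the predictable component, so only the noise is left.
offset = -observable
output_fiat = observable + offset + noise

def forecast_r2(signal, output):
    """Share of output variance a forecaster can explain from the signal."""
    return np.corrcoef(signal, output)[0, 1] ** 2

print(f"Gold standard R^2:   {forecast_r2(observable, output_gold):.2f}")  # ~0.80
print(f"Fiat with offset R^2: {forecast_r2(observable, output_fiat):.2f}")  # ~0.00
```

In the first regime the public signal explains roughly 80% of output variance; with full offset it explains essentially none.  That is the sense in which better policy makes forecasters look worse.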

We tend to assume that fields progress over time, that we are smarter than our ancestors.  But the logic of discretionary monetary policy implies that we should be worse at economic forecasting today than we were 120 years ago.

Recall this famous anecdote:

During a visit to the London School of Economics as the 2008 financial crisis was reaching its climax, Queen Elizabeth asked the question that no doubt was on the minds of many of her subjects: “Why did nobody see it coming?” The response, at least by the University of Chicago economist Robert Lucas, was blunt: Economics could not give useful service for the 2008 crisis because economic theory has established that it cannot predict such crises. As John Kay writes, “Faced with such a response, a wise sovereign will seek counsel elsewhere.” And so might we all.

If Robert Lucas had successfully predicted the 2008 crisis, he would not have deserved his Nobel Prize in Economics; his own work implies that such crises cannot be systematically predicted.

PS.  I highly recommend the Coleman Hughes interview.  It’s the best example I’ve seen of a discussion of AI safety that is pitched at my level.  Most of what I read on AI is either too hard for me to understand, or too elementary.

PPS.  The comment section is also interesting.  Here a commenter draws an analogy between those who think an AI can only become more intelligent by adding data (as opposed to self-play) and people who believe a currency can only have value if “backed” by a valuable asset.

Yet another prevalent (apparently) way people think about the limitations of synthetic data is that they think it’s like how prompting can bring out abilities a model already had, by biasing the discussion towards certain types of text from the other training data. In other words, they are claiming that it never adds any fundamentally new capabilities to the picture. Imagine claiming that about a chess-playing system trained through self-play…

Many of these wrong ways of looking at synthetic data sort of remind me of people not grokking how “fiat currency” can have value. They think if it’s not backed by gold, say, then the whole house of cards will come crashing down. The value is in the capability it enables, the things it allows you to do, not in some tangible, external object like gold (or factual knowledge).

 


