Consensus Methodology

In my latest consensus study (Powell, 2020), I used the Web of Science core database to search for peer-reviewed articles on "climate change" or "global warming" published from January 1, 2019 through December 31, 2019. I found 21,813 articles. Reading the articles themselves would take effectively forever. What about reading just the abstracts? If each abstract comprised 250 words, the total would come to about 5.45 million words. Compare that with War and Peace, at 587,000.

To make the job doable, I read the titles of the articles, and when a title suggested that the article might question AGW, I read the abstract and in some cases the article itself. To classify an article as a rejection, I looked for a clear statement that AGW is false or that some other process better explains the rise in global temperature. I did not count articles that reported some discrepancy but did not use that discrepancy as the basis for claiming that AGW itself is false. Any theory has discrepancies: observations that the theory cannot yet explain. They provide the next set of research problems.
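
The scale argument can be checked with simple arithmetic. The article count is from the search described above; the 250-word abstract length and the War and Peace word count are the figures assumed in the text:

```python
# Back-of-the-envelope reading load for the 2019 survey.
articles = 21_813              # Web of Science hits for 2019
words_per_abstract = 250       # assumed average abstract length
war_and_peace = 587_000        # word count cited in the text

total_words = articles * words_per_abstract
print(total_words)                  # 5,453,250 words of abstracts
print(total_words / war_and_peace)  # roughly nine War and Peaces
```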

I found only a handful of titles suggesting that their authors might reject AGW, and on closer inspection none did. Thus in this survey, the consensus on AGW is 100%. The sheer number of articles tells a story in itself. If they averaged three authors each, then we could ask: would some 60,000 research scientists spend their precious time and resources on a theory that is likely false?
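
The 60,000-scientist figure follows from the same counts. Three authors per article is the average assumed in the text, and the product counts authorships rather than distinct people; since some scientists author more than one paper in a year, the number of distinct scientists is somewhat lower:

```python
articles = 21_813
authors_per_article = 3        # assumed average from the text
authorships = articles * authors_per_article
print(authorships)  # 65,439 authorships, hence "some 60,000" distinct scientists
```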

Logic of My Methodology

The percentage of scientists accepting AGW, when derived from the peer-reviewed literature, has been shown to be more reliable than the percentage derived from opinion surveys.

The only practical way to assemble a database of relevant articles is to use the Web of Science, which categorizes each peer-reviewed article. I chose the topics "climate change" and "global warming." Some of the articles thus found do not appear on the surface to be about either topic, but the Web of Science has concluded that they are. If the WoS has miscategorized some articles, those unrelated articles will not reject human-caused global warming, so they cannot change the total number of rejections found, only the total number of articles reviewed. If instead a reviewer wanted to double-check the WoS's categorization, that reviewer would have to read every abstract and decide whether it is "about" human-caused global warming, which would make the job effectively impossible. Better to assume that the Web of Science is correct: even where its categorization is wrong, the error has no effect on the outcome.

It is a known and easily verifiable fact that publishing scientists rarely endorse the leading theory of their discipline directly. They take it as a given and work from there. I have reviewed hundreds of articles on plate tectonics, evolution, and meteorite impact cratering without finding a single endorsement. Cook et al. (2013) reviewed 11,944 peer-reviewed articles on AGW and found that only 0.5% directly endorsed the theory. Many of those were specifically about whether human activities are causing the observed global warming, so it was natural that they made an affirming statement. These were not endorsements but research findings.

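
For scale, the Cook et al. percentage works out to only a few dozen explicitly endorsing papers out of nearly 12,000 (a rough check; the 0.5% is as reported above):

```python
cook_articles = 11_944
endorsing_share = 0.005        # the 0.5% reported by Cook et al. (2013)
print(round(cook_articles * endorsing_share))  # about 60 papers
```
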
Scientists who have solid evidence against a prevailing theory will be sure to publish it. If there were such evidence against AGW, we would not have to search for it: the news would have made headlines and the discoverer would go down in history.

Putting all this together: to judge the extent of a consensus among scientists, look for peer-reviewed articles that reject the theory. If there are none, then as far as we can tell, scientists are unanimous AND there is no peer-reviewed evidence that falsifies the theory. This is incomparably better than merely asking scientists their opinion.
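
The screening procedure described in these sections can be sketched as a simple pipeline. Everything below is illustrative: the keyword screen and the rejection test are hypothetical stand-ins for the human judgment calls the survey actually relied on, and the sample records are invented.

```python
# Illustrative sketch of the title-then-abstract screening procedure.
# The predicates below are hypothetical stand-ins for human reading.

def title_questions_agw(title):
    # crude keyword screen; in the survey this was a human judgment
    keywords = ("refute", "no evidence", "natural cycle", "not anthropogenic")
    return any(k in title.lower() for k in keywords)

def abstract_rejects_agw(abstract):
    # a rejection needs a clear claim that AGW is false or that another
    # process better explains the warming -- not a mere discrepancy
    return "agw is false" in abstract.lower()

def count_rejections(articles):
    flagged = [a for a in articles if title_questions_agw(a["title"])]
    return sum(1 for a in flagged if abstract_rejects_agw(a["abstract"]))

# toy data, not real titles from the survey
sample = [
    {"title": "Glacier mass balance in the European Alps",
     "abstract": "We measure ice loss from 2000 to 2018."},
    {"title": "Do natural cycles explain recent warming?",
     "abstract": "We report a discrepancy in tropospheric trends."},
]
print(count_rejections(sample))  # 0: one flagged title, but no explicit rejection
```

The key design point mirrors the classification rule above: an article counts as a rejection only if its abstract makes the explicit claim, not merely because its title raised a question or it reported a discrepancy.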