Welp, there go my dreams of acquiring 17 PhDs in fields I know nothing about by ChatGPT-ing my way into science. Thanks for nothing, Suzi!
Yeah, that finding of "As an AI language model" phrases in scientific texts doesn't instill confidence. That's potentially another downside of using AI extensively in research: reduced public trust due to a (often simply perceived) lack of rigor. In a world where a not-insignificant percentage of people already believe that vaccines cause autism, politicians are lizard people, and Bill Gates is installing 5G chips in us via Covid jabs, anything that casts doubt on the scientific method is bound to do more harm.
Trust us humans to take something with as much upside potential as AI and then use it in the laziest possible way.
Only 17 PhDs? Jeez 🤣
Good points! I've been recently thinking a lot about reduced public trust in institutions (not just science). This might be the biggest concern an AI-augmented world faces. It's difficult to see how a society functions without trust in its institutions.
Laziness is sure to reduce trust. But I wonder how much other forces, like misinformation or flat-out false information, will contribute to a breakdown in trust.
I was going to ease into it with 17 PhDs just to see how it felt. Then I could always bump that up to a more reasonable number like 124 or so.
And yeah for sure, scientific papers using ChatGPT to draft stuff is probably the least of our worries when deep fakes and mass-produced disinformation are a thing.
The eternal optimist in me hopes that the end result will be a more skeptical society (in a healthy way). Just as we learned to question the authenticity of photos during the Photoshop era, we might start doing the same with any potentially AI-generated content.
But general erosion of trust and a sort of apathy is also a likely outcome. Fun times to be living in!
(Most days) I have hope too. The fact that we (and many others) are talking about the issues must be a good sign.
First, to get it off my chest and out of the way, I'm increasingly weary of the "AI is polluting our culture" argument. Leaving that aside for now...
More importantly, why don't we examine the commonly held assumption that more and better science should obviously be our goal? Here are two articles that question that assumption:
https://www.tannytalk.com/p/our-relationship-with-knowledge-part
https://www.tannytalk.com/p/the-logic-failure-at-the-heart-of
The idea that we should seek ever more knowledge ever more quickly assumes, typically with little if any questioning, almost as a matter of religious faith, that human beings can successfully manage ANY amount of knowledge delivered at ANY rate.
We seem to have conveniently forgotten that we are the species with thousands of massive hydrogen bombs aimed down our own throats, an ever-present, immediate existential threat that we typically find too boring to bother discussing.
More science without limit is like walking up to a clearly insane homeless person on the street and handing them a loaded shotgun. What could possibly go wrong, right?
Hi Phil! You bring up important safety issues that arise as our knowledge of the world advances. AI definitely brings up critical issues that policy makers should grapple with urgently.
But I tend to separate safety issues (even urgent ones) from the scientific process. Science, as a process, is the best method we've developed for understanding our world, while also limiting the effects of self-serving opinion, fallacious reasoning, personal bias and subjective preferences.
Seen that way, science is simply a really good way of discovering the truth about our world, and I wouldn't want to see politicians interfering in that process. But, as you highlight, there might be very compelling policy reasons for putting safeguards around the communication of, access to, or use of certain types of knowledge.
Hi Suzi, I agree, science is a really good way of discovering the truth about reality, no argument there.
My question is, how much more truth can human beings successfully manage?
Such a great question! I'm not sure; I think it might depend on the area of knowledge. I could imagine many would agree that access to some knowledge should be restricted. But access and exploration in other areas might be encouraged. For example, we're seeing great things happening in genomics and precision medicine that could have a huge positive impact on human well-being. Renewable energy technologies could be another area we want to encourage.
What do you think?
It's complicated for sure. One problem is that while we can divide up these different areas of research conceptually in a neat and tidy manner, in the real world many of these areas feed back upon each other. For example, using AI to do genetics research.
I certainly don't have a precise prescription for what should happen. My primary interest is in challenging and undermining the simplistic, outdated and increasingly dangerous philosophy that more knowledge is automatically better.
That outdated 19th-century knowledge philosophy is generally taken to be an obvious given, in a manner similar to how the Christian culture of 600 years ago had a typically unexamined, faith-based assumption that the clergy were authoritative in understanding our reality.
I'm sure you're familiar with CRISPR. So, what's going to happen when your next door neighbor can create new life forms in his garage workshop?
As nuclear weapons so simply illustrate, it doesn't matter if most of the emerging technologies are very beneficial if any one of them crashes the entire system.
Thanks for gracefully enduring this compulsive rant topic I am infected with. :-)
Jeff Bezos has a great quote: “Be wary of proxies.”
When you’re measuring a KPI, realize that the KPI is a proxy for an outcome you want to produce or measure.
Make sure you know what you’re measuring or want to produce. Oftentimes we lose sight of that and just focus on the numbers versus the anecdotal and other evidence that may suggest we aren’t achieving our goals.
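A toy sketch of what that can look like in practice. The KPI ("pages published per week"), the "reader understanding" goal, and all the numbers here are made up purely for illustration; the point is just that pushing a proxy harder can stop moving, or even hurt, the real goal:

```python
# Hypothetical illustration of a proxy KPI coming apart from the real goal.
# Proxy: pages published per week. Real goal: reader understanding.
# The quality curve below is assumed purely for the sake of the example.

def reader_understanding(pages: int) -> float:
    """Rises with output at first, then falls as volume crowds out quality."""
    quality = max(0.0, 1.0 - 0.02 * pages)  # quality erodes as volume grows
    return pages * quality

for pages in (10, 25, 40, 60):
    print(f"proxy KPI = {pages:2d} pages/week -> real goal = {reader_understanding(pages):5.1f}")
```

The proxy keeps climbing while the thing it stands in for peaks and then collapses, which is exactly the failure mode "be wary of proxies" warns about.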
Oh! I like that -- 'be wary of proxies'. This seems so true in terms of knowledge generation -- producing more but understanding less is not progress.
Amazing work, Suzi!!!
Incredibly transferable insights!!!
I like how you walk us through a research process and discuss AI pitfalls.
Very helpful for my work in AI x Writing Curriculum.
I have been thinking about Kahneman a lot this week.
Gen AI, even with a human in the loop, cannot do System 2 reasoning: thus no creativity, new insights, understanding, etc.
Perhaps if we join Gen AI with some other Deep Learning processes that involve some kind of memory and hierarchical processing, we will push beyond the relative simplicity of sequential processing and thus toward reasoning. Just some things I am thinking about today…
Thank you so much, Nick! I'm glad you enjoyed it.
Ah! Yes! I love that. Kahneman's System 1 and System 2 thinking is a great way to think about what AI can and can't do. Memory, hierarchical processing, and perhaps recurrent processing (with feedback loops) are almost certainly necessary for the types of thinking we find in System 2 -- thinking that involves effort, selective attention, reasoning, and deliberate decision making (a toy sketch of the contrast follows below).
Kahneman was such a wonderful thinker. He had a big influence on my understanding. There's a definite sense of loss with his recent passing.
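To make that contrast concrete, here's a loose Python sketch, purely illustrative and not a claim about how any real model (or the brain) works, of a single fast pass versus a deliberate loop that keeps a working memory and revises its own output:

```python
# A toy contrast between fast, one-shot responding ("System 1") and slow,
# iterative revision with working memory ("System 2"). Everything here is
# a hypothetical illustration, not an implementation of any real system.

def system1(question: str) -> str:
    """One forward pass: return the first association, no self-checking."""
    return f"first guess for {question!r}"

def system2(question: str, steps: int = 3) -> str:
    """A deliberate loop: keep drafts in working memory and feed each
    answer back in for another round of critique and refinement."""
    working_memory: list[str] = []
    answer = system1(question)            # start from the fast guess
    for _ in range(steps):
        working_memory.append(answer)     # remember the previous draft
        answer = f"revised({answer})"     # feedback loop: refine the draft
    return answer

print(system1("What is creativity?"))
print(system2("What is creativity?"))
```

The loop is doing the work here: effort, memory, and feedback are structural features, not just more of the same single pass.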
These are important issues. Worth your time.
Thanks, Matt! 😁
I'm up to my eyeballs in academic/research/scientific papers every day for work. The illusions mentioned here (explanatory depth, exploratory breadth, and objectivity) already exist in decades of human-created papers. Let me explain:
AI isn't damaging science; it's highlighting just how bad these papers have been, over the past 20 years specifically. Authors are doing what they've always done (copy-pasting boilerplate), only this time they're getting caught.
Recently it was shown that the papers supporting California's removal of calculus misrepresented hundreds of citations. The author just said what she wanted to say and added citations no one actually reviewed.
It's also highlighting the failure of peer review: GPT-derived key phrases haven't even been edited out.
This isn't a failure of AI; it's AI exposing decades of human failures, with authors now finally getting caught for being super sloppy.
I hope that the AI boom will force science to address its legacy of problems that already existed.
Thanks, Michael. Yes, I can see your point: AI might help with finding lazy science. Although, I'm not sure I agree that scientists are just now getting caught being 'sloppy'. As I mentioned under question 4, scientists were calling out other scientists for 'sloppy' work well before the introduction of ChatGPT. Because human nature is often a battle between the pull of laziness and the desire for status, I suspect we will always have those who think they can get away with 'sloppy' work and those who call them on it.
Excellent article, Suzi! This is the need of the hour. Do you think this lack of creative depth in AI also applies to art and video making, areas where AI supposedly shines?
Personally, I think the lack of depth you mentioned throughout the article definitely applies even to AI-generated art and videos. Although it seems very creative (at first), you can see repetitive patterns in AI art as well, and I don't think there is any escape from that.
This is such a good question! And it's been on my mind a lot lately. Yes, there definitely seem to be repetitive patterns in AI-generated art. But are there not also repetitive patterns in human art? We often describe art by its style, like Impressionism or Cubism, and we notice that these styles have similar techniques, colour palettes, and subject matter. It makes me question the meaning of creativity 🤔
Well, I think there is pattern repetition in terms of style, and then there is pattern repetition within the patterns themselves. I have noticed, for instance, that when it comes to abstract art, no matter the style you choose, DALL·E 3 seems to have a fondness for circular designs. There are also some preferences in colors, especially when it comes to making vibrant designs. Can a human artist regurgitate patterns? Absolutely, and that is to be expected. But when AI starts to do the same on a scale impossible for human artists to achieve, we may soon be in a very homogeneous world lacking creativity and, worse yet, lacking the skills to hone the creativity needed to make a change.
I am already finding it difficult to respond to emails without the help of AI and we are in its nascent stage!
Yes, good point. And combined with your point about finding it difficult to work without AI because of consumer expectations, it's difficult to see how we will prioritise deep work. (Most days) I hold onto the hope that humans don't normally settle for a homogeneous world. I've been wondering if competition among humans, which is often seen as our ugly side, might just be our friend on this one?
Declining trust, against a background of (e.g.) hallucination, is of ever-increasing concern. AI should not be marking its own homework. Human "peer" review and oversight remain essential.
Yes! Excellent point.