The opinions expressed by Entrepreneur contributors are their own.
Since the advent of generative AI (or “GenAI”) earlier this year, the future of human productivity has become even more uncertain. Expectations are growing that tools like ChatGPT, Midjourney, and Bard will soon replace human output.
As with most disruptive technologies, our reactions to it span the extremes of hope and fear. On the hopeful side, GenAI is being touted as a "revolutionary creative tool" that venture guru Marc Andreessen believes will one day "save the world." On the fearful side, some have warned that it could spell the "end" of originality, democracy, and even civilization itself.
But GenAI does not operate in a vacuum. In practice, it is embedded in a larger context of laws, economic forces, and cultural realities.
And that bigger picture already gives us at least four good reasons why AI will not replace humans any time soon.
Related: The biggest fears and dangers of generative AI — what to do about them
1. GenAI output may not be proprietary
The U.S. Copyright Office recently determined that works created by GenAI are not copyrightable.
If the work product is hybrid, only the parts added by humans are protected.
Entering multiple prompts is not enough: one Midjourney-generated artwork was refused registration even though its creator entered 624 prompts to produce it. A D.C. district court has since affirmed the Copyright Office's position.
Similar challenges exist in patenting inventions created by AI.
Markets are games played within legal constraints. Taking investment risk, controlling distribution, and allocating marketing budgets all depend on enforceable rights. Without those rights, the commercial case collapses.
And while some countries may grant limited rights over GenAI products, human contributions will still be needed to secure strong rights globally.
2. GenAI reliability remains spotty
In a world already saturated with information, trust is more important than ever. And so far, GenAI’s reliability has been very shaky.
For example, an appellate lawyer recently made headlines for using ChatGPT to draft a court filing. The cases it cited turned out to be invented, which led to sanctions against the lawyers involved. This bizarre failure mode is already producing legal repercussions: a federal judge in Texas now requires lawyers to certify that their filings were not drafted by unchecked AI, and other jurisdictions now require the use of AI to be disclosed.
Reliability issues are also emerging in STEM fields. Researchers at Stanford University and UC Berkeley found that GPT-4's ability to generate code mysteriously worsened over time. The same study found that its accuracy at identifying prime numbers plunged from 97.5% in March to an astonishingly low 2.4% just three months later.
Whether these are temporary glitches or permanent fluctuations, should people facing real-life stakes blindly trust AI without having human experts scrutinize the results? At this point, doing so would be imprudent, if not reckless. Besides, regulators and insurers will likely require AI output to be scrutinized by humans, regardless of what individuals are willing to tolerate.
In this day and age, the ability to generate information that merely "looks" legitimate is not worth much. Information is increasingly valuable because of its reliability, and human scrutiny is still required to ensure it.
3. LLMs are data myopic
More fundamentally, there may be deeper factors limiting the quality of insights that large language models (LLMs) can produce. LLMs have not been trained on some of the richest and highest-quality datasets we produce as a species.
These include not only personal information but also data created by public and private companies, governments, hospitals, and professional firms, none of which is authorized for training use.
And while we focus on the digital world, we often forget that a vast amount of information is never transcribed or digitized, such as communication that takes place only orally.
These missing pieces of the information puzzle inevitably lead to knowledge gaps that cannot be easily filled.
And if the recent copyright lawsuit filed by actress Sarah Silverman and others succeeds, LLMs could soon lose access to copyrighted content as training data. The pool of information available to them may actually shrink before it expands.
Of course, the datasets LLMs draw on will continue to grow, and AI inference will continue to improve. But the off-limits datasets will grow in parallel, making this "information myopia" a permanent feature rather than a bug.
Related: What AI can never do
4. AI doesn't decide what's valuable
GenAI's ultimate limitation may be the most obvious one: it isn't human at all.
We tend to focus on the supply side, on what generative AI can and cannot do. But who actually decides the final value of its output?
Not a computer program objectively evaluating the complexity of a work, but fickle, emotional, biased human beings. The demand side is still "all too human," with all its quirks and nuances.
We may never value AI art the way we value human art grounded in an artist's lived experience and interpretation. Cultural and political shifts may never be fully captured by algorithms. Interpreting this broader context, translating our perceived reality into meaningful inputs and outputs, and deploying them in the realm of human activity may, in the end, remain a human task.
What does GPT-4 itself think about this?
"I generate content based on patterns in the data I was trained on. This means that while I can combine and repurpose existing knowledge in novel ways, I cannot truly create or introduce anything completely new or unprecedented. Human creators, on the other hand, often produce breakthrough works that reshape entire fields or introduce entirely new perspectives. That kind of originality often comes from outside the framework of existing knowledge, and I cannot make that leap."
In other words, the final use will still be decided by humans, giving them an unfair advantage over AI tools with far greater computational power.
Humans thus remain 100% in control of the demand side, and that gives the best creators an edge: the power to intuitively understand human reality.
The value of what AI creates will always be constrained by this demand side. The "smarter" GenAI becomes (or the "dumber" humans become), the bigger this problem will grow.
Related: In the age of artificial intelligence, there’s always room for human intelligence
These limitations do not lower GenAI’s ceiling as an innovative tool. They simply point to a future in which we humans will always be centrally involved in all important aspects of culture and information production.
The key to unlocking our own potential may lie in understanding exactly where AI can provide unprecedented benefits and where uniquely human contributions can be made.
Our AI future, therefore, will be a hybrid one. As computer scientist Pedro Domingos wrote in The Master Algorithm: "Data and intuition are like horse and rider: you ride the horse instead of trying to outrun it. It's not man versus machine; it's man with machine versus man without machine."