When generative AI tools spew misinformation, break copyright laws, or perpetuate hateful stereotypes, the people using the technology are to blame.
After all, the large language models (LLMs) that generate text and images “don’t use their own brains” and don’t understand the meaning of what they produce, says Paul Pallath, vice president of applied AI at cloud consulting firm Searce. Founded in 2004, Searce provides AI services such as assessing a company’s AI “maturity,” or readiness, and identifying use cases.
“We are a long way from a time when machines will do everything for us,” says Pallath. Before joining Searce last year, he held executive positions in data science and analytics at SAP, Intuit, Vodafone and Levi Strauss & Company. (He also holds a PhD in machine learning.)
Humans cannot outsource ethical challenges to algorithms and programs. Instead, they need to build a foundation of empathy into responsible machine learning practices and generative AI applications, Pallath said.
For example, Pallath works with clients to move beyond the abstract, helping companies guide their generative AI implementations and establish frameworks for the ethical and responsible use of AI.
Pallath spoke to AdExchanger about some hypothetical, but highly likely, ethical scenarios marketers could face.
What should marketers do when a generative AI tool generates factually inaccurate or misleading information?
Paul Pallath: Understand, validate and fill in the gaps in everything that comes out. A lot of the content LLMs produce may seem true but isn’t. Don’t assume anything. Fact-checking is very important.
What if you’re not sure whether an LLM was trained on copyrighted material?
Unless you have the rights and explicit permission from the copyright holder, avoid using it, because you will be exposing your company to significant risk.
The LLM should also spit out the references from which its content was generated. Check all of those references and go back and read the original content. I’ve seen LLMs produce references that don’t exist; they simply fabricated information.
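As a rough illustration of that kind of reference check, here is a minimal sketch that assumes the model’s citations are URLs and uses a hypothetical cited_references list; it only confirms that a link resolves, and a human still has to read the source and verify it supports the claim:

```python
import requests  # third-party HTTP library

# Hypothetical list of URLs an LLM cited as its sources.
cited_references = [
    "https://example.com/industry-report-2023",
    "https://example.com/does-not-exist",
]

def check_reference(url: str) -> bool:
    """Return True if the cited URL actually resolves to a live page."""
    try:
        response = requests.head(url, allow_redirects=True, timeout=10)
        return response.status_code < 400
    except requests.RequestException:
        return False

for url in cited_references:
    status = "resolves" if check_reference(url) else "missing or unreachable"
    print(f"{url}: {status}")
```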
Say you’re a marketer looking for advertising images, and the LLM keeps returning people with lighter skin. How can marketers avoid harmfully reinforcing and amplifying bias?
How you design your prompts matters. Establish governance around prompt engineering, typically a review of the different types of prompts that should be used, to help ensure the content you publish is not biased.
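One way to make that governance concrete is to keep generation prompts in a reviewed registry rather than letting anyone free-type them. A minimal sketch, with hypothetical field names such as bias_notes and a made-up registry entry, not any vendor’s schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ApprovedPrompt:
    """Hypothetical record for a prompt template that has passed a bias review."""
    template: str
    reviewed_by: str
    review_date: date
    bias_notes: str = ""
    approved: bool = False

# Small registry of prompts that have been through governance review.
prompt_registry: dict[str, ApprovedPrompt] = {
    "hero_image": ApprovedPrompt(
        template=(
            "Photorealistic ad image of people of varied skin tones, ages "
            "and body types using the product outdoors"
        ),
        reviewed_by="brand-governance team",
        review_date=date(2023, 11, 1),
        bias_notes="Explicitly requests varied skin tones and ages.",
        approved=True,
    ),
}

def get_prompt(name: str) -> str:
    """Return an approved prompt, refusing unreviewed or rejected ones."""
    record = prompt_registry.get(name)
    if record is None or not record.approved:
        raise ValueError(f"Prompt '{name}' has not passed governance review")
    return record.template
```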
Working from an approved repository of images, the LLM can create different environments, change colors, clothing and brightness, and turn them into high-resolution digital images.
If you’re a retail business and you have permission to use [existing images of] a person, you can put different apparel on that person’s image so it becomes part of your marketing message. You can hire an approved brand ambassador who doesn’t have to come into the store for hours of photos and videos.
Should companies pay these approved brand ambassadors for AI-generated variations of their image?
Yes. You’ll be compensating them for the digital artifacts you create with different models. Companies will start working out different compensation structures.
Because LLMs are trained on online content, they often favor the “standard” forms of major languages such as English. How can marketers reduce language bias?
Although LLMs are mature from a translation perspective, variations exist within the same language. Where the content comes from, who vetted it, whether it is accurate from a cultural perspective and whether it fits the belief system of the country is not knowledge an LLM has.
Humans must be involved to rigorously review the generated content before it is published. Have cultural ambassadors in your company who understand the nuances of culture and how it resonates.
Given the power consumption involved in running LLMs, is generative AI morally questionable from a sustainability perspective?
Training these models consumes a significant amount of computing power.
The carbon-neutrality targets that large companies are chasing over the next five to 10 years form the basis for which vendors they choose, so those vendors don’t add to their carbon emissions. When making these choices, you need to look at the energy your data centers use.
How can marketers guard against exploitation, such as the use of prison labor or very low-wage workers, and other bad practices by LLM makers in training their models?
Even before the data actually enters the algorithm, you need data governance and data lineage: who created the data, who touched it and [a log of] the decisions made [along the way]. Data lineage provides transparency and allows algorithms to be audited.
Currently, that auditability does not exist.
Transparency is necessary to eliminate unethical elements. But we rely on the large companies that created these models to publish transparency metrics.
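As a rough illustration of what such a lineage log could look like, here is a minimal sketch using an append-only JSON-lines file, with made-up dataset, actor and decision fields standing in for a real governance tool:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class LineageEvent:
    """Hypothetical lineage record: who touched the data and what was decided."""
    dataset: str
    actor: str        # who created or touched the data
    action: str       # e.g. "created", "cleaned", "labeled", "approved"
    decision: str     # the decision made along the way, in plain language
    timestamp: str

def log_lineage_event(log_path: str, event: LineageEvent) -> None:
    """Append a lineage event to a JSON-lines audit log."""
    with open(log_path, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(asdict(event)) + "\n")

log_lineage_event(
    "lineage.jsonl",
    LineageEvent(
        dataset="ad_copy_training_set",
        actor="data-annotation vendor",
        action="labeled",
        decision="Removed records with unverifiable consent",
        timestamp=datetime.now(timezone.utc).isoformat(),
    ),
)
```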
This interview has been edited and condensed.
For more articles featuring Paul Pallath, click here.