Tech Archives - TheWrap
https://www.thewrap.com/category/tech/

AI Employees Should Have a ‘Right To Warn’ About Looming Trouble | Commentary
https://www.thewrap.com/ai-employees-should-have-a-right-to-warn-about-looming-trouble-commentary/
Fri, 28 Jun 2024 17:30:00 +0000
Reasonable rules allowing employees at AI research houses to warn about impending problems are sensible given the technology’s increasing power.

By now, we’re starting to understand why so many OpenAI safety employees left in recent months. It’s not due to some secret, unsafe breakthrough (so we can put “what did Ilya see?” to rest). Rather, it’s process-oriented, stemming from an unease that the company, as it operates today, might overlook future dangers.

After a long period of silence, the quotes are starting to pile up. “Safety culture and processes have taken a back seat to shiny products,” said ex-OpenAI Superalignment co-lead Jan Leike last month. “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, an ex-governance team employee, soon afterward. Safety questions are “taking a backseat to releasing the next new shiny product,” added ex-OpenAI researcher William Saunders.

Whether or not these assertions are correct (OpenAI disputes them), they make clear that AI employees need much better processes to report concerns about the technology to the public. Ordinary whistleblower protections tend to deal with the illegal, not the potentially dangerous, and so they fall short of what these employees require to speak freely without fear.

And though today’s cutting edge AI models aren’t a threat to society, there’s currently no good way to flag potentially dangerous future developments — at least for those working within the companies — to third parties. That’s why we’ve almost exclusively heard from those who’ve exited. Right now, the alert system is left to the corporations themselves.

So Saunders, Kokotajlo, and more than a dozen current and ex-OpenAI employees are calling for a “Right to Warn,” under which they’d be free to express concerns about potentially dangerous AI breakthroughs to external monitors. They went public earlier this month in an open letter calling for an end to non-disparagement agreements and for an anonymous process to flag concerns to third parties and regulators. After speaking with Saunders and Harvard Law Professor Lawrence Lessig, who is representing the group pro bono, their demands seem sensible to me.

“Your P(doom) does not have to be extremely high to believe that it makes sense to have a system of warning,” Lessig told me. “You don’t put a fire alarm inside of a school because you really believe the school is going to burn down. It’s just that if the school’s on fire, there ought to be a way to pull an alarm.”

The AI doom narrative has been way overblown lately, but that doesn’t mean the technology is without risk. And while a right to warn might send a signal that the tech is more dangerous than it is, and even do some marketing for OpenAI’s capabilities, it’s worth establishing some new rules to ensure that employees can talk when they see something. Even if it’s not species-threatening. 

The alternative — trusting companies to self-report troubling developments or meaningfully slow product cadence — has never worked. Even for those entities with novel corporate structures built around safety, an employee “Right to Warn” is essential.

My full conversation with Saunders and Lessig will go live next Wednesday on Big Technology Podcast. To get it in your feed, you can subscribe on Apple Podcasts, Spotify, or your app of choice.

This article is from Big Technology, a newsletter by Alex Kantrowitz.

The New ‘Cheap Fakes’ and the Coming Presidential Debates | Commentary
https://www.thewrap.com/cheap-fakes-presidential-debate-biden-jon-stewart-commentary/
Wed, 26 Jun 2024 13:15:00 +0000
Prolific deceptive media may be coming soon to a screen near you

Caution ahead as we anticipate Thursday night’s first Presidential debate. Unscrupulous actors are ready to pounce on every moment to splice and socialize candidate gaffes, whether they’re real or not. It’s not just about AI-generated “deep fakes” anymore. Now we also need to be aware of so-called “cheap fakes.” 

Just two weeks ago, a “cheap fake” widely circulated by The New York Post showed the president at the G7 summit in Italy with the caption, “President Biden appeared to wander off at the G7 summit in Italy, with officials needing to pull him back to focus.”

As TheWrap pointed out in its coverage, many began noting that the video on the Post’s social media was seemingly cropped so as not to show the other group of skydivers that Biden was turning to address. 

Last week, Jon Stewart’s “The Daily Show” featured the clip showing Biden seemingly waving to absolutely nobody, then being sheepishly escorted back to the main event. Based on that video clip, Stewart, in his trademark way, questioned whether Biden was “all there.” My wife and I – being avid Jon Stewart viewers who appreciate his overall sophistication – looked at each other as we watched and gulped. Sad.

It was only afterward that we realized it wasn’t true.

The real clip – which had been doctored to meet a certain political agenda – featured Biden waving to actual parachutists who had just landed. In other words, Biden’s mind was, in fact, very much intact. Not in the air.

Cheap fakes – unlike deep fakes generated using sophisticated AI tech — are real media (images, video) that have been deceptively cropped or edited using simple editing tools to create an impact very different from the real thing.

Partisans created the Biden clip — and its rather diabolical cheap fake magic — undoubtedly on the cheap (hence, the name) with low-tech deceptive cropping. Altered real audio tracks are also a hallmark of cheap fakes. All of these can be highly effective at spreading disinformation at mass speed and scale, as we saw last week with the Biden video, which had been watched over three million times just two days after it had been posted.

By the time word got out about the cheap fake shenanigans, much of the damage had already been done. Everyone loves a salacious story after all, even if it isn’t true. That’s especially the case when media companies have lost any urge or inclination to be fair and balanced. 

Cheap fakers’ attack on Nancy Pelosi

Nancy Pelosi notoriously suffered a similar fate in 2019. That’s when cheap fakers altered a video to slur her speech and make her appear to be intoxicated. The reality, of course, was something entirely different. Bad actors had simply subtly edited the video to meet their nefarious narrative, pressed “post” across all major social media platforms, and BAM! – the video went viral and created a fake narrative entirely its own. Red meat dished out. Red meat successfully served.

Both cheap fakes and deep fakes generate profound implications for media and entertainment as they proliferate at ever accelerating speeds. On the positive, legitimate side, studios use deep-fake technology, with licensed consent, to resurrect long-deceased actors, as with the digital recreation of Peter Cushing for the film “Rogue One: A Star Wars Story.”

Celebrities are already lining up to scan themselves — Paris Hilton did so several years ago as depicted in the 2018 documentary “The American Meme” — so that their likenesses can stay forever young to act in future productions even when they physically can’t. The business opportunity is expected to be so great that CAA already has set up an AI talent cloning operation called CAA Vault.

On the sadly more pervasive side, our new era of fakes poses a very real threat to authenticity and trust in the media. When the public can no longer distinguish the real from increasing fakery, even at some of the largest media companies on the planet, trust is undermined, and then all bets are off.

Peter Cushing and Carrie Fisher were digitally de-aged for “Rogue One” (Disney)

Dystopian results are not limited to presidents and ex-presidents, of course. Malicious unauthorized cheap and deep fake videos of celebrities (not to mention non-celebrities) are proliferating with increasing frequency with potentially defamatory and even tragic results. Taylor Swift found herself at the center of the deepfake storm earlier this year, and several teens have reportedly committed suicide after being victimized. 

President Biden has essentially no choice but to endure the fakery because he is the most public of public figures in the world. The courts are simply unlikely to intervene (have you seen this Supreme Court lately?). But others like Swift and the parents of teens most certainly can and will sue as this technology proliferates. 

The ironic thing about this tech-enabled mass deception – at least on the deep fake side of the equation — is that it’s now up to those same tech companies that empowered this alternative reality to knock it down. Big Tech claims to be on its way, with virtually every major platform developing one fake-spotting solution or another. Google, for example, has developed SynthID, which watermarks AI-generated content so it can be identified. And at least some in government are doing their best to move laws and regulations forward, including by pushing the No AI Fraud Act, which offers protections for human performers. But how likely is that anytime soon, given the gridlock and downright anger in Washington?

And remember, purveyors of cheap fakes don’t need AI or complex tech to work their artificial alchemy. Simple editing techniques, which are hard to spot, can suffice. So ultimately, and unfortunately, it’s up to all of us to be wary of all content, read the warning signs, check multiple sources, and learn to distinguish what is real from what is fake. 

Consider that as you watch the Presidential Thursday night fights.

Reach out to Peter at peter@creativemedia.biz. For those of you interested in learning more, sign up to his “the brAIn” newsletter, visit his firm Creative Media at creativemedia.biz, and follow him on Threads @pcsathy.

Cannes Lions: AI Chatter – of Both Fear, Optimism – Replaces Brands Buzz at the 2024 Ad Fest
https://www.thewrap.com/cannes-lions-2024-ai-panels-elon-musk/
Fri, 21 Jun 2024 02:08:27 +0000
There is no question that AI has been the “topic du jour” at this year’s international gathering

CANNES – There is no question that AI has been the “topic du jour” at this year’s Cannes Lions advertising festival, but with such a plethora of different voices on the Croisette — think everyone from hip-hop stars to tech nerds — it’s hard to know what the takeaway is.

Needless to say, there is both fear and optimism around the rapid development of this revolutionary, industry-shifting technology.

For billionaire entrepreneur and X owner Elon Musk, there’s a 10–20% chance that AI creates a global disaster, as he discussed on stage here. And newspapers are history. Author and guru Deepak Chopra used the fest to present his AI digital twin and the dawn of AI well-being. For Meta execs, a regular walk in the park to generate ideas is enhanced by AI doing the thinking. While for your average Lions’ attendee, the hope is that it could simply mean more holiday time. (Indeed, Musk said on Wednesday, “Why bother doing anything if AI can do it better?”)

Hopes for more free time could be dashed, however. “It was the same when robots were first a thing, and automation. There was fear that they were going to take our jobs,” CJ Bangah, partner at assurance, tax and advisory services company PwC, told TheWrap. “There was optimism that we were only going to have to work X days a week. And now we work more than ever with our phones.”

Feedback at the Lions from PwC partners, Bangah added, was that brands don’t want to have too much information about AI, “but they want to be having AI-plus conversations.” In other words, they know the importance of strategically integrating AI into their business, but goals shouldn’t be all AI all the time.

If we were to boil down the 2024 festival’s most prominent AI takeaway, it is that there is simply no getting away from it. Everyone had something to say about it.

Take Tuesday, when overnight oats were served at the Meta beach pavilion, featuring an AI demo area, while the tech company’s execs chatted about how AI impacts their business.

Meta was followed by a keynote from Musk, who is working on human-AI symbiosis at his company Neuralink. 

There was another official panel by Google that same afternoon, and dozens more running at other venues concurrently.

There was additionally talk of Apple Intelligence being a game changer following Apple’s recent announcement that it will incorporate AI tools into its phones, and Google and Microsoft both talked up their own AI developments at the Lions.

Generative AI has “major implications.”

The past year’s faster-than-expected developments in generative AI particularly sparked animated conversation across the fest.

Text to video is not only here, but accelerating fast. Execs at a VideoWeek event shared online clips between panel breaks showcasing the expanding world of text to video, a major game changer for image makers.

“People were predicting it was going to be here by 2030–35, but it’s happening now,” Javier Campos, coauthor of “Grow Your Business With AI,” said. “It has major implications for advertising.”

Campos added: “There are a few companies out there working on the latest generation of text to video. Everybody is racing to see how many more minutes of video can be produced, and how realistic the videos can be. The current state of the art is shorter models that allow one to two minutes, but they’re getting better and better very quickly. Now with this, you can have 20 ideas and then pick one. In the short term, it should help creative people.”

Regulation continues to be a concern.

Excitement and optimism about AI tools did not overshadow underlying concern for regulations that protect artists.

“I  have a positive view on AI. I think it’s in its early infancy, but it will liberate us to think about creativity in a different way, it will democratize access to creative tools,” Nathalie Lethbridge, founder and CEO of Atonik Digital, a boutique streaming advisory firm, told TheWrap. “If you look at the genesis of YouTube, no one thought that user-generated content would be what it has become. The issue is the safeguards around it. The issue is the copyright, the issue is the monetization tools, because with democratization comes a downward spiral in terms of monetization.” 

Andrew Grosso, Pickaxe Foundry CPO and cofounder, saw potential for box office predictions. “I feel like that’s the sexy version that we’re all talking about with ChatGPT and LLM,” he told TheWrap. “But for all of these companies, these content producers, there’s a whole world of machine learning, lowercase AI, things where you can use the power of machines to make smart decisions or get insights at scale.”

He continued: “There are a ton of people in Hollywood who know exactly what an opening weekend is going to do within a range, but they only know about four or five days beforehand. If you put together that kind of machine learning with the analysis and common sense of the industry … you get the machine plus the human and you can actually predict results.”

AI isn’t ready to take common jobs.

Ogilvy’s vice chairman Rory Sutherland approached AI with humor and concern for customer care and quality.

“My concern is that AI customer service is quite bad, rather like self-checkout tills at supermarkets. We’ve got this concern that AI is unbelievably good and gets rid of us. A more immediate concern is that AI is quite bad, but it gets deployed anyway because it saves money,” Sutherland lamented. “My fear is that we get this herd mentality that we must replace all people with AI.”

The executive added that “the entire impetus of the tech category is really job destruction” and that “there are few exceptions.”

Introducing Showrunner, the AI Platform That Lets You Make Your Own TV Show
https://www.thewrap.com/showrunner-streaming-platform-makes-tv-shows-using-ai-fable-simulation/
Thu, 20 Jun 2024 13:00:00 +0000
Fable Simulation has a slate of 10 original animated shows you can draw from

First came OpenAI’s Sora, which created video stories from text prompts. Then came Suno, which wrote songs via AI. Now the latest AI platform to test Hollywood’s nerves is a streaming service where fans can use the technology to create an original episode of their favorite animated television show.

The new service, called Showrunner, first demonstrated its AI model by developing nine “South Park” AI episodes, which it released as an 11-minute compilation on X during the WGA strike last July. The post drew 8 million views, opening the door to meetings with Fox, Netflix, Paramount and Sony.

Fable Studio’s “South Park” AI episode (Photo courtesy of Fable)

“We started talking with [the studios] about how do you actually let fans make episodes of shows using AI,” founder Edward Saatchi told TheWrap. “We’ve been talking with some of them about how revenue share would work. Maybe fans are making episodes of ‘South Park’ and it’s done in a way that nobody knows about until the studio has seen the episodes. And at the same time, we started to make original shows and started to work with users to make new episodes of TV shows.”

Fable Studio, a San Francisco-based production house that has won Primetime Emmy and Peabody awards for its scripted animation content, is behind the new offering, which Saatchi says is targeted at non-technical, non-professional users. 

Saatchi declined to disclose the costs around the platform or how much financing Fable has, but he noted that the initial plan is to offer Showrunner for free later this year, then introduce a subscription model. He envisions a future where Fable could partner with major streamers like Netflix on a paid add-on for subscribers to be able to make their own content using Showrunner’s capabilities.

As part of a formal product launch on May 30 announced on X, Showrunner unveiled a slate of 10 original animated shows, including “Exit Valley,” a satire of Silicon Valley leaders in the AI revolution; “Pixels,” a family comedy of AI-enabled devices living in the fictitious Sim Francisco; “What We Leave Behind,” an anime family drama about two orphans in Sim Francisco; “Ikiru Shinu,” a dark horror anime focused on the survivors of a global calamity trying to rebuild society; and “United Flavors of America,” a cartoon political satire of U.S. politics in 2024.

A “South Park” simulation

Saatchi was a founding member and producer at Oculus Story Studio, the VR animation studio that Meta shut down in 2017. After co-founding Fable in 2018, he went on to launch Fable Simulation, an AI-focused arm of Fable Studio, in 2021.

The Fable team made the “South Park” episodes without the permission of the animated comedy’s creators — solely for research purposes and with no intention of making the ability to create episodes of the show available for public use, the company said.

When asked about potential copyright issues that could arise from making new episodes of existing TV shows, Saatchi said that while you can’t copyright a style of animation, you can copyright characters and stories. 

“It would violate copyright for us to monetize and put a ‘South Park AI’ maker or ‘Simpsons AI’ maker out there without collaboration with the relevant studios and creators,” he said. “As the product opens up, if users are creating IP characters, we’ll do our best to take them all down and remove people who are releasing work that violates copyright.”

“South Park” creators Trey Parker and Matt Stone notably started their own generative AI deepfake company, Deep Voodoo, which was working on creating a full-length movie about Donald Trump deepfaking his image. However, the pair revealed in an interview with the LA Times that the project was ultimately scrapped due to the coronavirus pandemic.

Representatives for Parker and Stone did not immediately return TheWrap’s request for comment about the Showrunner episodes. 

How the unions feel about Showrunner

For Showrunner to establish a partnership with a streamer, it would require giving notice and negotiating the economic terms with the WGA and SAG-AFTRA, and could raise the same concerns from the guilds about unauthorized replication of actors’ voices that they expressed in the strike negotiations last year.

“If we’re talking about creating new episodes of an existing series — even if it’s an animated series — if that series is voiced by actors, and if this technology is going to be used to replicate their voices for the consumer-created episode of that series, then they would have to have full informed consent and compensation paid to the actors whose voices are being replicated,” Duncan Crabtree-Ireland, SAG-AFTRA’s national executive director and chief negotiator, told TheWrap.

“If the idea is let’s create a wholly new concept that doesn’t use any human performance whatsoever, but is created by one of those companies, then there would be a negotiation with the union over the economics of that,” he said.

Saatchi acknowledged the Hollywood unions’ concerns that studios’ use of artificial intelligence could affect their livelihoods. He also indicated that he would be open to working with the guilds, noting that the point of Showrunner is to show that AI is a “powerful storyteller” and “encourage conversation about how that changes entertainment.” But he argued that the Showrunner platform is focused on using AI to assist creatives and the public in making their projects come to life, rather than on replacing them. 

“Silicon Valley is always trying to tell Hollywood that their new miracle device will change everything. The reality is that as the great screenwriter William Goldman wrote ‘Nobody knows anything’,” Saatchi added. “AI for TV shows and movies may be the next big thing or the next web3 / VR and have very little impact on Hollywood. I hope that people don’t overworry because it’s super early and unclear what will happen.” 

Netflix, Paramount and the WGA declined to comment about Showrunner. Representatives for Fox and Sony did not return TheWrap’s request for comment. 

How Showrunner works

Showrunner is powered by Fable’s AI model Show-1, which was first introduced to the public in a research paper last year. Show-1 leverages a combination of large language models (LLMs), custom diffusion models and a “multi-agent simulation for contextualization, story progression and behavioral control,” according to the paper.

Much as tech companies have trained chatbots and OpenAI trained Sora, Fable has used “publicly available data” to train Show-1, Saatchi said, meaning it is scraping the internet without seeking permission from individual copyright holders. It’s the same fair-use strategy all the tech companies have used to justify potential copyright infringement. Saatchi told TheWrap that the company is using AI “ethically” and “what matters (as it matters for all art) is whether a work is a copy or derivative or original.”

“We are focused on making original works of art,” he said.

For the purposes of the “South Park” experiment, the stable diffusion model was trained using approximately 1,200 characters and 600 background images from “South Park,” and voice cloning technology was used to recreate the characters’ voices. Transcriptions of most “South Park” episodes are part of OpenAI’s GPT-4 training dataset, which eliminated the need for a custom fine-tuned model.

Showrunner (Courtesy of Fable Studio)

From a user prompt of around 10 to 15 words, Showrunner can make full scenes and episodes of television ranging between two and 16 minutes, complete with AI-selected dialogue, voice editing, shots, characters and story development. Users can then choose to go into the episode or scene and edit the scripts, camera angles and voices to their liking, or even remake the episode. Showrunner is aiming for seasons that range between 10 and 22 episodes.

While the platform is being teased as the Netflix of AI, Showrunner still has limitations. It is only able to create animated shows, and it’s more suited to procedurals or sitcoms whose episodes reset with different storylines, rather than the multi-episode arcs of a show like “Breaking Bad” or “Game of Thrones.” 

Entering the Alpha

The platform currently has over 50,000 “Alpha” users signed up for its waitlist, who are being let in daily to try out the platform for themselves with the goal of creating a broader roster of shows. 

One of the first Alpha users given access to Showrunner was Dov Friedman, the co-creator of “Hutzpa!” Friedman and his collaborators Brian Gross and Sam Frommer have been developing the show on and off since 2011 but have struggled to get their foot in the door in Hollywood.

“One of the reasons why we did this is we want people who are creative who haven’t been able to get that shot to get that shot,” Saatchi said.

Friedman’s team already has a lengthy pitch deck, filled with an abundance of characters and concept art. While they see Showrunner as an opportunity to develop the show further with an assist from AI, he insisted that using the tool will not replace the human touch.

“We hand drew these characters that are based off of people that we know. These are our stories,” Friedman told TheWrap. “AI is great to create and jumpstart things but there’s still ingenuity and there’s still the creativity of man that’s out there.”

Hutzpa! (Photo courtesy of Fable Studio/Dov Friedman)

Saatchi and Showrunner, Friedman added, are allowing him to experiment with making a show without having to win the financial backing of a big studio.

“We don’t need to have a $150,000 budget per episode with 40 people working on it,” Friedman said. “We think that with Edward’s tools and then using some traditional animation, we’re going to get a show out of this.”

While “Hutzpa!” is being planned as a 10-episode first season with the hope for five seasons in total, “Exit Valley” is targeting 22 episodes in its first season, some of which are being made by Fable and others by platform users. A jury of filmmakers and creatives will select 18 episodes made by users at the end of July that will be considered canon to the first season of “Exit Valley.”

In addition to helping creators, Saatchi said he hopes Showrunner will lead to completely new content at a time when audiences are drowning in sequels. 

“One of the experiences I’ve had is working with people from Pixar and from Dreamworks who felt that they were in a factory because it was so highly specialized, so complicated, that they weren’t able to express much creativity at all,” he said. “I’d rather see weirder stuff by people who are not beholden to making a $100 million movie or show.”

The future of Showrunner

While it’s still early days, Saatchi envisions a future where creators like Friedman and the average fan can get a cut of revenue for “guest episodes” of streamers’ IP that would be made with their permission and potentially picked up to help with subscriber retention. A second step would be the creation of an add-on within the major streamers’ own platforms where subscribers could pay an extra fee to make their own Showrunner-aided content.

“Giving the 270 million Netflix subscribers, for $5 extra, the ability to interactively make new shows is pretty attractive and would help them fuel growth,” he said. “And for the streamers in the weaker position, they’re also very interested because they want to jump up a bit and have something unique.”

When it comes to user-generated content that leverages AI, SAG-AFTRA’s Crabtree-Ireland warned that controls need to be put in place to ensure that a person’s voice cannot be used without consent or appropriate compensation. The No Fakes Act, a bipartisan proposal that would protect the voice and visual likeness of individuals from unauthorized recreations from generative AI, is currently pending in the U.S. Senate. 

“The reality is that thus far we have seen user-generated content occupy a certain place in the entertainment ecosystem, and the availability of AI tools for consumers and the general public is not necessarily going to change that,” Crabtree-Ireland said. “So we need to be cautious about it.”

“But it’s also important that we not let fear become paralyzing because we need to take appropriate action to channel all this technology,” he added. “All of this has to be implemented in a way that’s going to be human-centered or we’re not going to support it.”

81% of Creatives Think AI Will Benefit Their Work in New Study, but Regulation Remains Majority Concern
https://www.thewrap.com/uta-artificial-intelligence-creatives-study-cannes-lions/
Thu, 20 Jun 2024 08:30:00 +0000
UTA introduced "AI Takes Center Stage: The Real-Time Impact of AI in Creative Media & Marketing" at Cannes Lions Thursday

CANNES – A new study on the use of artificial intelligence in the arts found that 81% of creatives believe AI will benefit their work, while a 71% majority agreed that regulation of the new technology is a primary concern.

The study, titled “AI Takes Center Stage: The Real-Time Impact of AI in Creative Media & Marketing,” polled 500 creative professionals and was presented Thursday at Cannes Lions by United Talent Agency‘s research and strategy division, UTA IQ.

AI continued to be a major topic at this year’s Cannes Lions advertising fest after the technology made its debut at the 2023 edition of the ad world’s biggest gathering.

After crunching the numbers to help shed light on Hollywood creatives’ adoption of AI, UTA found the majority of those polled regularly use the technology and have come to think positively about its future, marking a departure from the concerns that greeted the technology when it burst into the mainstream last year.

The agency’s study showed that entertainment and marketing creatives see it as a way to “enhance” rather than “replace” human creativity. It noted that the findings challenge the idea that creatives are largely opposed to AI.

“We are at a profound turning point in AI’s adoption in marketing and entertainment, moving from fear and resistance to curiosity, excitement and cautious optimism,” the report stated. 

Joe Kessler, Global Head of UTA IQ, said in a statement that the arc from resistance to adoption is nothing new where art and technology meet.

“There has always been fear and uncertainty around new technology,” Kessler said. “Painters worried they would be replaced by photography, movies by theater, TV by movies. Instead, new art forms were born. Human creativity has always found a way, and this survey shows today’s creators are continuing that long tradition of molding technology to their advantage.”

The executive added: “From automating rote tasks to helping explore near-infinite variations on an idea, AI can help creators explore the boundaries of what’s possible and maximize our most precious commodity as humans – our time.”

Key takeaways from the “AI Takes Center Stage” study include:

  • 81% think AI will make more things possible in their work.
  • 73% think AI will elevate content.
  • 71% are either excited about AI or ‘AI-curious.’
  • 70% have used AI, and roughly half do so at least weekly.
  • 69% believe AI will be the most impactful technology of their lifetime.
  • The 30% who say they never use AI at work are five times more likely to say they remain doubtful of AI’s value and impact.
  • 88% find tasks easier with the use of AI.
  • 84% are acting on new ideas thanks to the help of AI.
  • 75% say they are creating higher-quality work by using AI.
  • The top three most common uses of AI for work today are generation of ideas (61%), enhancing images and audio (54%) and improving productivity (52%).

The study said that “while advertising and marketing execs were slightly more bullish (~5% margin) than their entertainment counterparts, the findings were largely consistent across both groups.” 

The downside is concern over challenges like protecting creatives’ rights: 71% of respondents supported calls for increased regulation of AI.

Those surveyed anticipated the growth of AI will also “prompt a pendulum swing back to valuing uniquely human creativity and experiences, such as random inspiration, IRL experiences and nostalgia for analog mediums,” the report said. 

The study surveyed professionals in the United States, Canada and the United Kingdom who are directly involved in content creation across entertainment, advertising and marketing.

Apple’s AI Intelligence: Safe, Secure and Ethically Sourced – Or Is It? | Commentary https://www.thewrap.com/apple-intelligence-is-it-safe-ai/ https://www.thewrap.com/apple-intelligence-is-it-safe-ai/#comments Fri, 14 Jun 2024 13:15:00 +0000 https://www.thewrap.com/?p=7564050 Its “next big thing,” which Apple debuted this week to much fanfare, asks us to consider what these marketing slogans really mean

The post Apple’s AI Intelligence: Safe, Secure and Ethically Sourced – Or Is It? | Commentary appeared first on TheWrap.

Apple’s redefinition of “AI” to mean “Apple Intelligence” was all the rage earlier this week. That’s kind of funny since a big piece of Apple’s announced AI launch strategy is OpenAI and ChatGPT dependent — which means, by default, Microsoft, OpenAI’s biggest investor.

But casting that aside for the moment, Apple CEO Tim Cook, as expected, firmly placed privacy and security at the center of his pitch. That’s a fascinating and extremely narrow needle to thread, since privacy, security and respect for intellectual property go hand in hand with the data AI uses to “do its thing.” 

Apple’s AI strategy comes in two parts. First, Apple – using its own homegrown AI tech – will enable users to do myriad tasks more productively and efficiently directly on their iPhones, iPads and Macs. None of those tasks – like prioritizing messages and notifications – requires any outside assistance from OpenAI or any other Big Tech generative AI. Apple Intelligence will be opt-in, which means that users must agree to make their data available to Apple’s AI, either directly on device or by leveraging the power of its own private cloud for more complex tasks. Apple assures its faithful that it will never ever share their personal data. If all of that is true, so far, so good. No privacy or copyright harm, no infringing foul.

But Apple may be doing at least some of the same things for which OpenAI and other Big Tech AI have been rightfully criticized. The company’s Machine Learning Research site states that its foundational AI model trains on both licensed data and “publicly available data collected by its web-crawler, AppleBot.” There are those three words again – “publicly available data.” Typically, that’s code for unlicensed copyrighted works — not to mention personal data — being included in the training data set, which calls into question whether Apple Intelligence is fully “safe” and “ethically sourced.” That more troubling interpretation is bolstered by the fact that Apple says that web publishers “have the option to opt out of the use of their web content for Apple Intelligence training with a data usage control.” 

The notion of “ethically sourced” AI also goes beyond privacy and copyright legalities. It gives rise to larger considerations of respect for individuals, their creative works and their right to commercialize them. That’s particularly pointed for Apple, which – notwithstanding its recent “Crush” video brain freeze, when it (literally) pummeled the creative works of humanity down into an iPad – prides itself on safety and on being Big Tech’s home for the creative community.

The second part of Apple’s strategy is also problematic from this “ethically sourced” perspective. It kicks in when users seek generative AI solutions that Apple’s own AI can’t handle: with the user’s permission, Apple hands off the relevant prompt to OpenAI and ChatGPT to do the work. Remember, ChatGPT scoops up “publicly available data,” which, again, means that third party personal data and unlicensed copyrighted works are included to some extent.

An Apple spokesperson declined to comment, but the company in its press materials said it takes steps to filter personally identifiable data from publicly available information on the web. Apple has also stated it does not use other users’ private personal data or other user interactions when training models built into Apple Intelligence. 

In any event, all of this properly calls into question Apple’s “white knight” positioning. Let’s take the legal piece first. If Apple’s use of “publicly available data” means what I think it means, then Apple faces the same potentially significant legal liability that OpenAI and other Big Tech players face. It also may be legally liable when it hands off its generative AI work to OpenAI’s ChatGPT even with user consent. Merely because CEO Sam Altman and his Wild West gAIng at OpenAI do the work does not necessarily excuse Apple from legal liability. 

Companies can be secondarily liable for copyright infringing behavior if they are aware of those transgressions but actively enable and encourage them anyway. That’s certainly at least arguably the case with Apple, which is well aware that OpenAI is accused of copyright infringement on a grand scale for training its AI models on unlicensed copyrighted works. That’s what The New York Times case, and many others like it, are all about.

To be clear, the concept of “ethically sourced” AI is nuanced beyond the strictly legal part of the equation. Creator-friendly Adobe found this out the hard way. It launched its standalone Firefly generative AI application last year with great artist-first fanfare, trumpeting the fact that its AI trained only on licensed stock works already in the Adobe family. It was later reported, however, that that wasn’t exactly true. Firefly apparently had, in fact, also trained – at least in some part – on images from visual AI generator Midjourney, a company that now also finds itself embroiled in significant copyright litigation. And with that inconvenient truth, Adobe’s purity was called into question, which is fair when a company makes purity a headline feature.

But Adobe’s transgressions appear to be of a completely different order of magnitude than OpenAI’s wholesale guardrail-less taking, and its ethical intentions seem to be generally honorable. Given the great steps it takes at least on the privacy side of the equation, Apple too seems to land closer to Adobe than to OpenAI and other Big Tech generative AI services. 

That doesn’t make Apple completely innocent though, especially when being “ethically sourced” is front and center in its pitch. The company developed its two-part strategy to serve its over 2.2 billion users, keep them firmly in its walled garden and catch up in the expected multi-trillion dollar AI race. And it built its next big thAIng knowing that its “Apple Intelligence” solution likely includes at least some third party personal data and unlicensed copyrighted works.

Reach out to Peter at peter@creativemedia.biz. For those of you interested in learning more, sign up to his “the brAIn” newsletter, visit his firm Creative Media at creativemedia.biz, and follow him on Threads @pcsathy.

Elon Musk Makes X Likes Private as Part of an ‘Important Change’ https://www.thewrap.com/x-likes-private-elon-musk-twitter/ https://www.thewrap.com/x-likes-private-elon-musk-twitter/#respond Wed, 12 Jun 2024 21:00:36 +0000 https://www.thewrap.com/?p=7563134 "We are making Likes private for everyone to better protect your privacy," the app notifies users

The post Elon Musk Makes X Likes Private as Part of an ‘Important Change’ appeared first on TheWrap.

X is making your likes private, as of Wednesday.

“Important change: your likes are now private,” Musk said after the announcement was made through the X Engineering account.

Yes, before this, anyone could go to your page and see what you liked on the X platform, unless you were a premium user.

Individuals will still be able to view their own likes – and like counts still appear on individual posts – but the general public will no longer be able to access another account’s likes. A notification letting users know about the change popped up on accounts Wednesday as the Likes tab disappeared from others’ accounts.

“We are making Likes private for everyone to better protect your privacy,” the notification states. “Liking more posts will make your ‘For you’ feed better.”

The update was teased back in May by X Engineering Director Haofei Wang.

“Public likes are incentivizing the wrong behavior,” Wang wrote on his X account. “For example, many people feel discouraged from liking content that might be ‘edgy’ in fear of retaliation from trolls, or to protect their public image. Soon you’ll be able to like without worrying who might see it. Also a reminder that the more posts you like, the better your For you algorithm will become.”

News of the update received a mixed response from users. Many mourned the loss of “stalking” other people’s likes, a favorite pastime on the app, while others had used the feature to vet the kind of person another user might be.

Google’s Note To Self: Section 230 Saved You Before With YouTube, But May Not For AI | Commentary https://www.thewrap.com/googles-ai-overviews-section-230-commentary/ https://www.thewrap.com/googles-ai-overviews-section-230-commentary/#respond Wed, 12 Jun 2024 18:00:00 +0000 https://www.thewrap.com/?p=7562943 The search giant’s new “AI Overviews” feature may lead to massive legal liability

The post Google’s Note To Self: Section 230 Saved You Before With YouTube, But May Not For AI | Commentary appeared first on TheWrap.

Google may have just opened a Pandora’s box with its new “AI Overviews” feature, the AI-generated one-paragraph summaries that now appear first in its search results. The reigning champ of search rushed it into the AI market – and fell flat on its face – in an attempt to beat back AI market leader OpenAI/Microsoft and new competitors like the highly touted Perplexity. It was yet another in a continuing string of baffling generative AI gaffes (remember its “woke” Gemini image-generating launch?) that certainly didn’t build confidence in a brand that arguably has stifled search innovation for decades.

Apart from Google’s feature not being ready for prime time – in one highly publicized, amusing example, it suggested eating rocks to get your daily protein fix – it also potentially exposes the search giant to massive legal liability. Imagine users taking Google AI Overviews’ advice literally and eating those rocks (or acting on whatever other AI Overviews are plainly wrong, as they reportedly frequently are), because some users most certainly will.

Google itself touted the fact that users would no longer need to scroll through traditional search results when it introduced AI Overviews, literally saying “let Google do the Googling for you.” So if Google tells its users to buy what it’s selling and they understandably let go of the reins, why shouldn’t they be able to go after the search giant for the damage caused by their reliance? That reliance, after all, would be absolutely reasonable.

To date, Big Tech internet companies have beaten back claims of legal liability for the content they deliver based on Section 230 of the Communications Decency Act. Congress enacted that provision to relieve platforms of the resource-depleting burden of policing third-party content, thereby generally immunizing Big Tech from liability for content uploaded by users. Section 230 enabled YouTube and other social media monsters to become the juggernauts that they have – frequently much to our peril – since users upload (and consume) endless streams of unchecked content.

But with AI Overviews, Google can no longer hide behind that Section 230 shield. Now Google — through the AI technology it built — is directly responsible for the specific AI Overview content that tops its results and may, or may not, be accurate. That’s why rock-eating users may have strong product liability-type claims against the search giant. And that’s just one example. Imagine the endless stream of others coming down the pike, because they almost certainly will.

Google – which essentially invented generative AI – certainly has the resources to try to correct the course here. But it’s fair to ask whether it’s even possible for AI to always generate a single fully accurate response since generative AI, by its very nature, steals and “reimagines” myriad answers of the past, many of which may be unverified.

As if that’s not bad enough, Google already has been widely criticized by publishers and the creative community in general for essentially stealing their livelihoods with AI Overviews. Users are led to believe that those paragraphs – based on Google’s scraping of wide swaths of publisher content — may be all they need, with no reason to peruse multiple links or visit any of the third-party sites that fueled them. That transformed dynamic creates a new search world order, upending the long-standing one upon which publishers relied as part of the original deal to make their content searchable. Google literally said the quiet part out loud. It essentially conceded its quest for market substitution – not particularly smart while the company finds itself under the antitrust microscope.

Google’s potentially limitless liability issue at the artificial hands of AI Overviews hasn’t gotten nearly the attention it deserves. Perhaps this shouldn’t be surprising, because legal liability isn’t nearly as “sexy” as press coverage of Google blackening its eye with yet another flawed product rollout that featured amusing, yet brand-bruising, “eat rocks” stories.

For Google (and all other generative AI companies), perhaps Section 230 – or the lack of its protection – is the headline story it should take most seriously, next to copyright infringement. But judging by my recent conversation with one high-level Google AI executive, I don’t think that memo has been widely received. That may be changing fast though. It was just recently reported that the share of search results featuring AI Overviews has dropped from 27% to 11% since the feature’s launch.

Google clearly has lost a step (or several) in this generative AI “spAIce race” and finds itself on its heels as it tries to catch up to Microsoft/OpenAI and others. And panicked product proliferation previously has produced pernicious product liability.

This time, Section 230 may not stop the legal liability spigot from flowing.

Reach out to Peter at peter@creativemedia.biz. For those of you interested in learning more, sign up to his “the brAIn” newsletter, visit his firm Creative Media at creativemedia.biz, and follow him on Threads @pcsathy.

Apple Debuts ‘Apple Intelligence’ AI Feature With ChatGPT Integration  https://www.thewrap.com/apple-intelligence-ai-openai-chatgpt/ https://www.thewrap.com/apple-intelligence-ai-openai-chatgpt/#respond Mon, 10 Jun 2024 19:14:41 +0000 https://www.thewrap.com/?p=7561321 The tech giant will offer OpenAI chatbot functions for free, without the need to create an account

The post Apple Debuts ‘Apple Intelligence’ AI Feature With ChatGPT Integration  appeared first on TheWrap.

Apple unveiled its plans for AI technology Monday, launching “Apple Intelligence,” which includes a partnership with OpenAI to integrate ChatGPT into the tech company’s products. Apple emphasized key differences from existing generative AI tools, including doing more processing directly on computers and phones without sending personal information elsewhere, as well as added privacy when information is sent to its servers.

At Apple’s annual Worldwide Developers Conference, the tech giant announced its AI integration plan, adding machine learning capabilities across the company’s product offerings. The company is also integrating OpenAI’s ChatGPT into Siri and other apps, powered by OpenAI’s GPT-4o and other generative AI models.

ChatGPT integration will be coming to iOS 18, iPadOS 18 and macOS later this year, the company said. It will offer ChatGPT functions for free, without the need to create an account. However, paid ChatGPT subscribers will be able to access premium functions within Apple products.

The function will serve as an extension of Apple Intelligence, with Siri or other apps providing further “expertise” details from ChatGPT. The user will have the ability to decide whether they would like to share data with ChatGPT in order to receive further information. Once permission is granted, Apple will automatically feed that prompt to ChatGPT and provide an answer. 

OpenAI CEO Sam Altman, who was reportedly in the audience at the conference, posted on social media that he is “very happy to be partnering with apple to integrate chatgpt into their devices later this year!”

Apple noted that it intends to expand its AI capabilities further by partnering with more AI firms to integrate their technology into its products.

Additionally, Apple announced a wide range of AI tools and features of its own, including language detection, image generation capabilities and Siri advancements.

Apple Intelligence features a text-generator for emails and text messages, allowing for quick replies and better contextual responses. 

Craig Federighi, Apple’s senior vice president of software engineering, stressed that AI tools currently available from other companies “know very little about you and your needs,” which is something Apple Intelligence is setting out to change. 

Tim Cook’s AI Moment | Commentary https://www.thewrap.com/tim-cooks-ai-moment-commentary/ https://www.thewrap.com/tim-cooks-ai-moment-commentary/#comments Fri, 07 Jun 2024 17:30:00 +0000 https://www.thewrap.com/?p=7560039 Apple's Worldwide Developers Conference: The true test of a great tech leader is guiding a company through a computing shift.

The post Tim Cook’s AI Moment | Commentary appeared first on TheWrap.

Shortly after 10 a.m. Pacific time on Monday morning, Tim Cook will take the stage at Apple Park in Cupertino for a critical moment in his career. Cook has lived through much in his 12-plus years at the helm of Apple — a staredown with the FBI, global political upheaval, several major product releases — but never anything like this. On Monday, he’ll begin to tackle his first major computing shift as CEO.

We still don’t know how long it will take artificial intelligence to change the way we interact with technology — it could take two years, it could take 50 — but we know it’s coming. And as humans and computers become comfortable relating in natural language, some aspects of our current user interfaces will become clunky, and eventually obsolete. Laugh all you want at the failures of the Rabbit R1 or the Humane Pin, but these early efforts to rethink how we relate with our devices were simply the first attempts to figure out what’s next. They won’t be the last.

More ‘AI devices’ will come, more will fail. But eventually something might work. That matters to the $3 trillion iPhone maker, which has driven the last major computing shifts and isn’t interested in missing this one. At Monday’s WWDC, Apple’s flagship developer conference, Cook is expected to make his first big AI announcements and chart a path ahead. And even if generative AI technology is in its infancy, the stakes are high.

“People want to see that Apple recognizes the opportunity, the importance of this moment, and that it wants to put itself in a position to be a player,” Ben Bajarin, CEO and principal analyst at market intelligence firm Creative Strategies, told me. People want to know “that they’re taking this seriously,” he said.

For Cook, it will be a tricky balance to maintain. Rolling out a new technology that isn’t yet disrupting your flagship business while building something new and useful for your customers is hard. But if the massive computing shifts from desktop to mobile to cloud demonstrated anything, it’s that those who hang too tightly to the past’s fundamentals tend to be left behind.

Microsoft, in the Steve Ballmer era, bear-hugged Windows all the way through a lost decade. When Satya Nadella took over, he prioritized cloud computing, even at the expense of Microsoft’s primary businesses, and eventually revitalized the company. Nadella’s cloud pivot put him in position to land OpenAI as a core strategic partner years later. Now, Microsoft is once again the world’s most valuable company. 

Already, reports indicate that Apple will be bold. Beyond simple features like AI upgrades to voice memos, the company is expected to embed AI into its operating system. It may eventually allow you, for instance, to use Siri to crop a picture and email it, or record a meeting and text it, according to Bloomberg. These changes will likely take some time to roll out, but they’d work on the interaction layer, a potential signal that nothing is too precious to change. 

Apple is also expected to make another untraditional move, bringing in OpenAI to build alongside it. Apple doesn’t typically loop third parties into its development process, but it’s expected to build OpenAI’s technology into Siri to improve the beleaguered product. It apparently took some convincing within Apple to get everyone on board, but the company eventually rallied around the idea, and it’s ready to act. The sense of urgency is there.

Monday will be just the start. Apple is reportedly considering how its physical devices could evolve for the AI era. This includes potentially adding cameras to AirPods and even developing a home robot, per Bloomberg. After spending a week with the new Ray-Ban Metas — which let you listen to music, take pictures, and summon an AI bot in a pair of sunglasses — it seems like Apple’s interest here is warranted. These products will likely go mainstream.

Ultimately, all tech leaders are judged by their ability to take advantage of computing shifts. It’s why Steve Jobs is viewed as a legendary CEO after taking Apple from the desktop to mobile era with a device that outpaced rivals and never looked back. This is what makes Monday’s event so consequential. It’s why I’m excited to be heading there in person, to watch it all unfold. It’s Cook’s AI moment, and it promises to be a show.

This article is from Big Technology, a newsletter by Alex Kantrowitz.
