

Australian Group Chronicles the Growing Realism of ‘Deep Fakes,’ and Their Geopolitical Risk


Image by Chetraruc used with permission

May 5, 2020 – A new report from the Australian Strategic Policy Institute’s International Cyber Policy Centre detailed the state of rapidly developing “deep fake” technology and its potential to produce propaganda and misleading imagery more easily than ever.

The report, by Australian National University Senior Advisor for Public Policy Katherine Mansted and researcher Hannah Smith, examined the risks posed by artificial intelligence technology that allows users to falsify or misrepresent existing media, as well as to generate entirely new media.

While audio-visual “cheap fakes” (media edited with tools other than AI) are not a recent phenomenon, the rapid rise of artificial-intelligence-powered technology has given nefarious actors several means of producing misleading material at a staggering pace. The ASPI report highlighted four of them.

First, the face swapping method maps the face of one person and superimposes it onto the head of another.

The re-enactment method allows a deep fake creator to use facial tracking to manipulate the facial movements of their desired target. Another method, known as lip-syncing, combines re-enactment with phony audio generation to make it appear as though speakers are saying things they never did.

Finally, motion transfer technology allows the body movements of one person to control those of another.
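Deep fake face swapping relies on learned neural models, but the basic detect-align-blend step those models automate can be illustrated without any AI. The sketch below is a crude, non-AI approximation using the OpenCV library; the file names are hypothetical placeholders, and it assumes one clear frontal face in each image:

```python
# A crude, non-AI illustration of superimposing one face onto another
# head with OpenCV. Real deep fakes use learned encoders/decoders; this
# only shows the detect-align-blend step they automate.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def first_face(image):
    """Return (x, y, w, h) of the first detected face."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    return faces[0]

source = cv2.imread("source_face.jpg")   # hypothetical: face to transplant
target = cv2.imread("target_head.jpg")   # hypothetical: head to receive it

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Resize the source face to fit the target's face bounding box.
patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))

# Poisson blending hides the seam far better than a plain paste would.
mask = 255 * np.ones(patch.shape, patch.dtype)
center = (tx + tw // 2, ty + th // 2)
output = cv2.seamlessClone(patch, target, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("swapped.jpg", output)
```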

An example of face swapping. Source: “Bill Hader impersonates Arnold Schwarzenegger [DeepFake]” Video

This technology creates disastrous possibilities, the report said. When using various deep fake methods in conjunction, one can make it appear as though critical political figures are performing offensive or criminal acts or announcing forthcoming military action in hostile countries.

If deployed in a high-pressure situation where the prompt authentication of such media is not possible, real-life retaliation could occur.

The technology has already caused harm outside of the political arena.

The vast majority of deep fake technology is used on internet forums like Reddit to superimpose the faces of non-consenting people, such as celebrities, onto the bodies of men and women in pornographic videos, the report said.

Visual deep fakes are not perfect, and those available to the layman are often recognizable. But the technology has developed rapidly since 2017, and so have the programs that work to make deep fakes undetectable.

A generative adversarial network pits two AI models against each other: one generates deep fakes while the other tries to detect them, checking and refining the output hundreds or thousands of times, until the deep fake audio and visual media are unrecognizable to the detector network, let alone to the human eye. “GAN models are now widely accessible,” the report said, “and many are available for free online.”
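As a rough illustration of that adversarial loop, the toy sketch below trains a tiny generator against a tiny detector on one-dimensional data using PyTorch. Every size and distribution here is invented for the example; production deep fake models are vastly larger and image-based, but the shape of the contest is the same:

```python
# A toy sketch of the adversarial loop behind GANs, using invented 1-D
# data. The generator learns to mimic "real" samples while the detector
# learns to flag forgeries, round after round, as the report describes.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # detector

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):  # the "hundreds or thousands" of refinement rounds
    real = torch.randn(64, 1) * 0.5 + 2.0  # "authentic" samples: N(2, 0.5)
    fake = G(torch.randn(64, 8))           # the generator's forgeries

    # Detector update: learn to score real media 1 and forgeries 0.
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: adjust the forgeries to fool the detector.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# By now G's samples should cluster near 2.0, hard for D to tell apart.
print(G(torch.randn(5, 8)).detach().squeeze())
```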

Video tweeted from a nameless, faceless account that appears to show House Speaker Nancy Pelosi inebriated, but was merely slowed and pitch-corrected.

Such forged videos are already widespread and may have damaged public trust in elected officials and others, although that effect is difficult to quantify.

The report also detailed multiple instances in which a purposely altered video circulated online and potentially misinformed viewers, including a cheap fake video that was slowed and pitch-corrected to make House Speaker Nancy Pelosi appear inebriated.
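The Pelosi clip shows how low the technical bar for a cheap fake sits. Assuming the free ffmpeg tool is installed, a command like the following sketch (wrapped in Python, with hypothetical file names) slows a clip to 75 percent speed while the atempo filter keeps the voice’s pitch natural, which is essentially the manipulation described:

```python
# A sketch of how little a "cheap fake" requires: slow a clip to 75%
# speed while keeping the voice's pitch natural. Assumes the free
# ffmpeg tool is installed; file names are hypothetical placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "speech.mp4",
    "-filter_complex",
    "[0:v]setpts=PTS/0.75[v];"  # stretch video timestamps to 75% speed
    "[0:a]atempo=0.75[a]",      # slow audio while preserving pitch
    "-map", "[v]", "-map", "[a]",
    "slowed_speech.mp4",
], check=True)
```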

Another video mentioned in the report, generated by AI think tank Future Advocacy during the 2019 UK general election, used voice generation and lip-syncing to make it appear as though now-Prime Minister Boris Johnson and then-opponent Jeremy Corbyn were endorsing each other for the office.

Such videos can have a devastating effect on public trust, wrote Mansted and Smith. The production of such videos is more accessible than ever, and deep fake creators can also use bots to swarm public internet forums and comment sections with commentary that, lacking a visual element, can be almost impossible to recognize as artificial.

Apps like Botnet exemplify the problem of deep fake bots: users make an account, post to it, and are quickly flooded with artificial comments. The same technology is frequently used on real online forums, where its output can be impossible to distinguish from legitimate comments.
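How cheaply such comment bots can be built is easy to demonstrate. The toy sketch below trains a tiny Markov chain on an invented corpus of comments and emits new, plausible-looking ones; real bots increasingly use large language models and are far harder to spot:

```python
# A toy sketch of how cheaply comment bots can be built: a tiny Markov
# chain over a small invented corpus emits new, plausible-looking
# comments. Real bots increasingly use large language models instead.
import random
from collections import defaultdict

corpus = [  # invented examples standing in for scraped comments
    "totally agree with this take",
    "this is so true honestly",
    "agree with this so much",
    "honestly this is a great point",
]

# Map each word to the words observed immediately after it.
chain = defaultdict(list)
for comment in corpus:
    words = comment.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)

def fake_comment(max_words=8):
    """Walk the chain from a random opening word to build a comment."""
    word = random.choice([c.split()[0] for c in corpus])
    out = [word]
    while word in chain and len(out) < max_words:
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)

print(fake_comment())  # e.g. "this is so true honestly this is a"
```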

The accelerated production of such materials can make it feel as though the future of media is one where almost no video can be trusted to be authentic, and the report admitted that “On balance, detectors are losing the ‘arms race’ with creators of sophisticated deep fakes.”

However, Mansted and Smith concluded with several suggestions for combating the rise of ill-intentioned deep fakes.

Firstly, the report proposed that governments and online platforms should “fund research into the further development and deployment of detection technologies” as well as “require digital platforms to deploy detection tools, especially to identify and label content generated through deep fake processes.”

Secondly, the report suggested that media and individuals should stop accepting audio-visual media at face value, adding that “Public awareness campaigns… will be needed to encourage users to critically engage with online content.”

Such a change of perception will be difficult, however, as the spread of this imagery is largely based on emotion and not critical thinking.

Lastly, the report suggested the implementation of authentication standards such as encryption and blockchain technology.

“An alternative to detecting all false content is to signal the authenticity of all legitimate content,” Mansted and Smith wrote. “Over time, it’s likely that certification systems for digital content will become more sophisticated, in part mitigating the risk of weaponised deep fakes.”
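As a minimal sketch of that certification idea, the following signs a media file’s SHA-256 hash with an Ed25519 key so anyone holding the matching public key can verify the file later. It assumes the third-party Python cryptography package and a hypothetical file name; a production system, blockchain-anchored or otherwise, would also need key distribution, timestamps, and revocation:

```python
# A minimal sketch of certifying authentic content: the publisher signs
# a file's SHA-256 hash, and any viewer with the public key can verify
# it. Assumes the third-party "cryptography" package; the file name is
# a hypothetical placeholder.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path):
    """Return the SHA-256 hash of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

# Publisher: generate a keypair and sign the video's hash at release.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(file_digest("press_briefing.mp4"))

# Viewer: verify the copy they received; this raises InvalidSignature
# if even one byte of the file was altered after signing.
public_key.verify(signature, file_digest("press_briefing.mp4"))
print("authentic")
```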

 



Sam Altman to Rejoin OpenAI, Tech CEOs Subpoenaed, EFF Warns About Malware

Altman was brought back to OpenAI only days after being fired.


Photo of Snap CEO Evan Spiegel, taken 2019, used with permission.

November 22, 2023 – OpenAI announced in an X post early Wednesday morning that Sam Altman will be rejoining the ChatGPT maker as CEO after he was fired on Friday.

Altman confirmed his intention to rejoin OpenAI in an X post Wednesday morning, saying that he was looking forward to returning to OpenAI with support from the new board.

Former company president Greg Brockman also said Wednesday he will return to the AI company.

Altman and Brockman will return alongside a newly formed board, which includes former Salesforce co-CEO Bret Taylor as chair, former US Treasury Secretary Larry Summers, and Quora CEO Adam D’Angelo, who previously held a position on the OpenAI board.

Satya Nadella, the CEO of OpenAI backer Microsoft, voiced support for both Brockman and Altman rejoining OpenAI, adding that he looks forward to continuing to build a relationship with the OpenAI team in order to best deliver AI services to customers.

OpenAI received backlash from several hundred employees who threatened to leave and join Microsoft under Altman and Brockman unless the current board of directors agreed to resign.  

Tech CEOs subpoenaed to attend hearing

Sens. Dick Durbin, D-Illinois, and Lindsey Graham, R-South Carolina, announced Monday that tech giants Snap, Discord and X have been issued subpoenas to appear before the Senate Judiciary Committee on December 6 over concerns about child sexual exploitation online.

Snap CEO Evan Spiegel, X CEO Linda Yaccarino and Discord CEO Jason Citron have been asked to address how or if they’ve worked to confront that issue. 

Durbin said in a press release that the committee “promised Big Tech that they’d have their chance to explain their failures to protect kids. Now’s that chance. Hearing from the CEOs of some of the world’s largest social media companies will help inform the Committee’s efforts to address the crisis of online child sexual exploitation.” 

Durbin noted in a press release that both X and Discord initially refused to accept the subpoenas, requiring the US Marshals Service to deliver the documents in person.

The committee is also looking to have Meta CEO Mark Zuckerberg and TikTok CEO Shou Zi Chew testify, but it has not received confirmation of their attendance.

Several bipartisan bills have been brought forward to address that kind of exploitation, including the EARN IT Act, proposed by Sens. Richard Blumenthal, D-Connecticut, and Graham, which would hold platforms liable under child sexual abuse material laws.

EFF urging FTC to sanction sellers of malware-containing devices

The Electronic Frontier Foundation, a non-profit digital rights group, asked the Federal Trade Commission in a November 14 letter to sanction resellers like Amazon and AliExpress following allegations that mobile devices and Android TV boxes purchased from their stores contain malware.

The letter explained that once the devices were turned on and connected to the internet, they would begin “communicating with botnet command and control (C2) servers. From there, these devices connect to a vast click-fraud network which a report by HUMAN Security recently dubbed BADBOX.”

The EFF added that the malware often operates without the consumer’s knowledge, and that without advanced technical knowledge, there is nothing consumers can do to remedy it themselves.

“These devices put buyers at risk not only by the click-fraud they routinely take part in, but also the fact that they facilitate using the buyers’ internet connections as proxies for the malware manufacturers or those they sell access to,” explained the letter. 

EFF said that the devices containing malware included ones manufactured by Chinese companies AllWinner and RockChip, both of which the EFF has previously reported shipping products with malware.



Sam Altman to Join Microsoft, New FCC Broadband Map, Providers Form 4.9 GHz Coalition

After being fired on Friday by the board of OpenAI, former CEO Altman will join Microsoft to lead a new AI research team.


Photo of Sam Altman, taken 2017, used with permission.

November 20, 2023 – Microsoft CEO Satya Nadella announced in an X post Monday that former OpenAI CEO Sam Altman will be joining Microsoft after being fired from the machine learning company. 

Over the course of the last four days, OpenAI has undergone several shifts in leadership, including OpenAI investor Microsoft hiring OpenAI president and chairman Greg Brockman to lead an AI research team alongside Altman.

Brockman, who had concurrently been relieved of his role as chairman of the OpenAI board, announced his resignation Friday via X upon learning that the board had decided to fire Altman.

OpenAI said in a blog post Friday that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

OpenAI then told The Information on Saturday that Emmett Shear, co-founder of the streaming site Twitch, would serve as CEO, a role CTO Mira Murati had filled in the interim.

Following Nadella’s announcement Monday morning, nearly 500 of OpenAI’s 700 employees signed a letter threatening to leave their roles and work under Altman and Brockman at Microsoft unless all of the current board members resigned.

Also on Monday, OpenAI board member Ilya Sutskever posted a message of regret on X regarding the board’s decision to remove Altman and Brockman. The phrase “OpenAI is nothing without its people” is now spreading across employees’ X accounts.

FCC announces new national broadband map

The head of the Federal Communications Commission announced Friday the third iteration of its national broadband map, showing that just over 7.2 million locations lack access to high-speed internet.

That is less than the 8.3 million identified in May.   

FCC Chairwoman Jessica Rosenworcel noted that the map data are fluctuating less between iterations, a sign of improving map accuracy.

Previous iterations of the national broadband map were criticized for not accurately depicting areas with and without service, amid widespread concern that inaccuracies would skew the allocation of Broadband Equity, Access and Deployment funding.

The map outlines where adequate broadband service is and is not available throughout the nation and provides viewers with information on the providers who service those areas and the technology used to do so. 

Providers form spectrum advocacy coalition 

A group of telecom industry players including Verizon and T-Mobile announced Thursday the formation of the Coalition for Emergency Response and Critical Infrastructure to advocate for select use of the 4.9 gigahertz (GHz) spectrum band.

The coalition supports prioritizing state and local public safety agencies as the main users of the 4.9 GHz band, while ensuring that non-public-safety licensees operating on the band avoid causing interference.

“Public Safety agencies have vastly different needs from jurisdiction to jurisdiction, and they should decide what compatible non-public-safety use means within their jurisdictions,” read the coalition’s letter.  

In January of this year, the FCC adopted a report and order to manage the use of the 4.9 GHz band, while seeking comment on the role a band manager would play in facilitating license allocation between public safety and non-public-safety entities.

It had proposed two methods of operation for the band manager: either lease access rights from public safety entities and sublease them to non-public-safety entities, or facilitate direct subleasing between public safety operators and external parties.

In its letter to the FCC, the coalition endorsed the second of those methods, stressing that it would allow public safety license holders to retain authority over who they sublease their spectrum to.



FCC Cybersecurity Pilot Program, YouTube AI Regulations, Infrastructure Act Anniversary

The FCC has proposed a pilot program to help schools and libraries protect against cyberattacks.


Photo of a fourth grade computer lab, taken 2009, used with permission.

November 15, 2023 – The Federal Communications Commission proposed Monday a cybersecurity pilot program for schools and libraries that would invest $200 million over three years in ways to best protect K-12 students from cyberattacks.

In addition to assessing what kinds of cybersecurity services best suit student and school needs, the program would subsidize the cost of those services used in schools.

The program would operate as its own Universal Service Fund program, separate from the existing school internet subsidy program known as E-Rate.

“This pilot program is an important pathway for hardening our defenses against sophisticated cyberattacks on schools and ransomware attacks that harm our students and get in the way of their learning,” said FCC Chairwoman Jessica Rosenworcel.

The proposal would be part of the larger Learn Without Limits initiative, which supports internet connectivity in schools to help reduce the homework gap by enabling kids’ access to digital learning.

YouTube rolling out AI content regulations 

Alphabet’s video-sharing platform YouTube announced in a blog post Tuesday that it will roll out AI guidelines over the next few months to inform viewers when they are interacting with “synthetic” or AI-generated content.

The rules will require creators to disclose when a video contains AI-generated content. Creators who fail to disclose that information could see their work flagged and removed, and they may be suspended from the platform or subject to other penalties.

For the viewer, tags will appear in the description panel of videos indicating that the video is synthetic or AI-generated. YouTube noted that for videos dealing with more sensitive topics, it may use more prominent labels.

YouTube’s AI guidelines come at a time when members of Congress and industry leaders are calling for increased effort toward AI regulatory reform, and after President Joe Biden signed an executive order on AI guidelines in October.

Two-year anniversary of the Infrastructure Investment and Jobs Act

Thursday marked the second anniversary of the Infrastructure Investment and Jobs Act, which prompted a $400-billion investment in the US economy.

The IIJA funded a wide variety of programs and initiatives; over 40,000 sector-specific projects have received funding, several of them aimed at improving the broadband sector.

The IIJA invested $65 billion in improving connectivity, which helped establish the $14-billion Affordable Connectivity Program. The ACP has so far helped more than 20 million US households get affordable internet through a monthly subsidy of $30, or $75 on tribal lands.

Outside of the ACP, the IIJA called on the National Telecommunications and Information Administration to develop the Broadband Equity, Access and Deployment program, a $42.5-billion investment in high-speed broadband deployment across all 50 states.

Currently, states are in the process of submitting their BEAD draft proposals, which outline how each state will administer the funding it receives, account for funding it already has, and use broadband mapping data.

