ABC's of AI – Artificial Intelligence the Easy Way

OpenAI's Power Shakeup: CTO Mira Murati Exits as $150 Billion Pivot Looms
September 25, 2024

Murati Steps Down: What Her Exit Means for OpenAI's Billion-Dollar AI Revolution

Breaking news from the world of AI: Mira Murati, the Chief Technology Officer of OpenAI, has dropped a bombshell. In a surprise announcement on X, Murati said she is resigning after a distinguished six-and-a-half-year tenure at the company. Known for steering OpenAI through key advancements like DALL-E and Codex, Murati cited a personal need for exploration as her reason for stepping down.

Her departure couldn't come at a more pivotal moment for OpenAI, which is in the midst of a seismic shift towards becoming a for-profit entity, potentially valuing the company at a staggering $150 billion. The transition marks a strategic pivot, reportedly granting CEO Sam Altman equity and signaling a new era for the AI giant.

Murati’s exit follows a string of high-profile departures, including former Chief Scientist Ilya Sutskever and President Greg Brockman, raising eyebrows about the organization’s stability and future trajectory, particularly its pursuit of AGI.

In response to the news, Altman expressed gratitude for Murati's contributions and hinted at forthcoming leadership changes amid these turbulent times. As OpenAI braces for the transition, questions loom about who will fill Murati's shoes and how the company will navigate this pivotal juncture.

Apple Faces Challenges in China as Huawei Capitalizes on AI Features
September 12, 2024

The launch of Apple's iPhone 16 series in China has been met with mixed reactions, as local users discovered that the highly anticipated AI features would not be available in their language until next year[1]. This delay has sparked skepticism about the value proposition of the new iPhones, especially in light of strong competition from domestic rivals like Huawei[1].

Huawei’s AI Advantage

In contrast to Apple’s delayed AI rollout, Huawei’s Mate XT will offer AI-powered features from day one, including:

  • Text summary, translation, and editing functions
  • AI-enhanced image editing capabilities, such as object removal and photo retouching[6][7]

This immediate availability of AI features has contributed to the Mate XT’s strong pre-launch performance, with over four million pre-orders reported[5].

Consumer Reactions

Chinese consumers have expressed frustration with Apple’s delayed AI rollout:

  • “The absence of AI in China is akin to cutting one of Apple’s arms,” one Weibo user commented[1].
  • Another user questioned, “With the biggest selling point unavailable, shouldn’t you charge us half the price?”[1]

These sentiments reflect growing dissatisfaction among Chinese consumers who feel they are not receiving the full value of Apple’s latest innovations[1].

Market Implications

Apple’s AI delay in China could have significant consequences:

  • Market share: Apple’s ranking in China has already dropped from third to sixth place[1].
  • Competitive advantage: The delay gives competitors like Huawei an opportunity to establish themselves as leaders in AI-powered smartphones[1].
  • Regulatory challenges: Apple has yet to announce an AI partner in China, and the country’s regulatory landscape adds complexity to the situation[1].

Huawei’s Resurgence

Huawei has made a strong comeback in the high-end smartphone market:

  • The company launched the Mate 60 Pro with a domestically manufactured chip, defying US sanctions[1].
  • Huawei has become the world’s largest vendor of foldable phones, surpassing Samsung Electronics[1].
  • The new Mate XT, with its unique tri-fold design, has generated significant buzz in the market[2][4].

Looking Ahead

Apple’s AI strategy in China represents a crucial challenge for the company. While its brand still holds appeal, the delayed AI rollout and fierce competition from resurgent local players like Huawei pose serious obstacles[1]. The company’s ability to adapt its AI offerings to local conditions may ultimately determine its future success in this vital market.

Citations:
[1] https://technology.inquirer.net/136988/huawei-mate-xt-is-the-first-ever-tri-folding-smartphone
[2] https://www.cnn.com/2024/09/09/tech/china-huawei-max-xt-launch-intl-hnk/index.html
[3] https://www.nytimes.com/2024/09/10/business/huawei-trifold-iphone.html
[4] https://www.trendforce.com/news/2024/09/12/news-a-comparison-between-apple-iphone-16-and-huawei-mate-xt/
[5] https://www.reuters.com/technology/huaweis-tri-foldable-phone-stirs-chinese-pride-2800-price-tag-panned-2024-09-11/
[6] https://www.huaweicentral.com/huawei-mate-xt-has-ai-object-removal-and-expansion-tools-for-photo-editing/
[7] https://www.reuters.com/technology/huawei-teases-tri-fold-smartphone-raising-competition-with-apple-china-2024-09-10/
[8] https://readwrite.com/huawei-mate-xt-ultimate-design-triple-foldable-phone/

What California lawmakers did to regulate artificial intelligence – CalMatters
September 7, 2024

California legislators just sent Gov. Gavin Newsom more than a dozen bills regulating artificial intelligence: proposals to test AI models for threats to critical infrastructure, curb the use of algorithms on children, limit the use of deepfakes, and more.

But people in and around the AI industry say the proposed laws fail to stop some of the most worrisome harms of the technology, like discrimination by businesses and government entities. At the same time, the observers say, whether passed bills get vetoed or signed into law may depend heavily on industry pressure, in particular accusations that the state is regulating itself out of competitiveness in a hot field.

Debates over the bills, and decisions by the governor on whether to sign each of them, are particularly important because California is at the epicenter of AI development, with many legislators making pledges this year to regulate the technology and put the state at the forefront of protecting people from AI around the world.

Without question, Senate Bill 1047 got more attention than any other AI regulation bill this year — and after it passed both chambers of the legislature by wide margins, industry and consumer advocates are closely watching to see whether Newsom signs it into law.

Introduced by San Francisco Democratic Sen. Scott Wiener, the bill addresses huge potential threats posed by AI, requiring developers of advanced AI models to test them for their ability to enable attacks on digital and physical infrastructure and to help non-experts make chemical, biological, radiological, and nuclear weapons. It also protects whistleblowers who want to report such threats from inside tech companies.

But what if the most concerning harms from AI are commonplace rather than apocalyptic? That’s the view of people like Alex Hanna, head of research at Distributed AI Research, a nonprofit organization created by former Google ethical AI researchers based in California. Hanna said 1047 shows how California lawmakers focused too much on existential risk and not enough on preventing specific forms of discrimination. She would much rather lawmakers consider banning the use of facial recognition in criminal investigations since that application of AI has already been shown to lead to racial discrimination. She would also like to see government standards around potentially discriminatory technology adopted by contractors.

“I think 1047 got the most noise for God knows what reason but they’re certainly not leading the world or trying to match what Europe has in this legislation,” she said of California’s legislators.

Bill against AI discrimination is stripped

One bill that did address discriminatory AI was gutted and then shelved this year. Assembly Bill 2930 would have required AI developers to perform impact assessments and submit them to the Civil Rights Department, and would have made the use of discriminatory AI illegal and subject to a $25,000 fine for each violation.

The original bill sought to make use of discriminatory AI illegal in key sectors of the economy including housing, finance, insurance, and health care. But author Rebecca Bauer-Kahan, a San Ramon Democrat, yanked it after the Senate Appropriations Committee limited the bill to assessing AI in employment. That sort of discrimination is already expected to be curbed by rules that the California Civil Rights Department and California Privacy Protection Agency are drafting. Bauer-Kahan told CalMatters she plans to put forward a stronger bill next year, adding, “We have strong anti-discrimination protections but under these systems we need more information.”

Like Wiener’s bill, Bauer-Kahan’s was subject to lobbying by opponents in the tech industry, including Google, Meta, Microsoft and OpenAI, which hired its first lobbyist ever in Sacramento this spring. Unlike Wiener’s bill, it also attracted opposition from nearly 100 companies from a wide range of industries, including Blue Shield of California, dating app company Bumble, biotech company Genentech, and pharmaceutical company Pfizer.

The failure of the AI discrimination bill is one reason there are still “gaping holes” in California’s AI regulation, according to Samantha Gordon, chief program officer at TechEquity, which lobbied in favor of the bill. Gordon, who co-organized a working group on AI with privacy, labor, and human rights groups, believes the state still needs legislation to address “discrimination, disclosure, transparency, and which use cases deserve a ban because they have demonstrated an ability to harm people.”

Still, Gordon said, the passage of Wiener’s bill marked important progress, as did the passage of Senate Bill 892, which sets the standards for contracts government agencies sign for AI services. Doing so, author and Chula Vista Democratic Sen. Steve Padilla told CalMatters earlier this year, leverages the government’s buying power to encourage safer and more ethical AI services.


While some experts criticized Wiener’s bill for what it failed to do, the tech industry has gone after it for what it does. The measure’s testing requirements and associated enforcement mechanisms will kneecap fast-moving tech companies and create a chilling effect on code sharing that inhibits innovation, big tech companies like Google and Meta have said.

Given the industry’s power in California, this criticism is the proverbial elephant in the room, said Joep Meindertsma, CEO of Pause.ai. Pause.ai is a proponent of regulating AI, endorsing Wiener’s bill and even organizing protests at the offices of California-based companies including Meta and OpenAI. So Meindertsma was happy to see so many regulatory bills clear the legislature this year. But he worries they will be undermined by the tension between a desire to regulate AI and a desire to win the race — among not just companies but entire countries — to have the best AI. Regulators in California and elsewhere, he said, want to have it both ways.

“The market dynamic between countries that are trying to stay ahead of the competition, trying to avoid regulating their companies too much over fear of slowing down while the others keep racing, that dynamic is the issue that I feel is the most toxic in the entire situation,” he said.

Legislators mentioned in this story:

  • Bill Dodd, Democrat, State Senate, District 3 (Napa)
  • Scott Wiener, Democrat, State Senate, District 11 (San Francisco)
  • Josh Becker, Democrat, State Senate, District 13 (Menlo Park)
  • Steve Padilla, Democrat, State Senate, District 18 (Chula Vista)
  • Buffy Wicks, Democrat, State Assembly, District 14 (Oakland)
  • Rebecca Bauer-Kahan, Democrat, State Assembly, District 16 (San Ramon)

There are already signs that industry pressure could prevail, at least against Wiener’s bill.

Several Democratic members of California’s Congressional delegation have called on Newsom to veto the bill. Former House Speaker Nancy Pelosi, who represents San Francisco, has also come out against it.

In recent weeks, Newsom seems to have leaned into AI, raising questions over how much appetite he has to regulate it. The governor showed great interest in using AI to solve problems in the state of California, signing an agreement with AI powerhouse Nvidia last month, launching an AI for tax advice pilot program in February, and on Thursday introducing an AI solution aimed at connecting homeless people with services. When asked directly about Wiener’s bill in May, Newsom equivocated, saying that lawmakers must strike a balance between responding to calls for regulation and overdoing it.

The sleeper hits of this year’s AI legislation

Some bills that were more targeted — and significantly less publicized — than Wiener’s 1047 did find success in the legislature.

SB 942, introduced by Democratic Sen. Josh Becker of Menlo Park, would require companies to supply AI detection tools to the public at no charge so people can tell the difference between AI-generated content and reality.

SB 896 by Democratic Sen. Bill Dodd of Napa would force government agencies to assess the risk of using generative AI and disclose when the technology is used.

Other AI bills passed this legislative session are designed to protect children, including one that makes it a crime to create child pornography with generative AI and another that requires the makers of social media apps to turn off algorithmic curation of content for users under age 18 unless they get permission from a parent or guardian. Children would instead see, by default, a chronological stream of recent posts from accounts they follow. The latter bill also limits notifications from social media apps during school hours and between midnight and 6 a.m.

A trio of bills passed last week aim to protect voters from deceptive audio, imagery, and video known as deepfakes. One bill goes after individuals who create or publish deceptive content made with AI and allows a judge to order an injunction requiring them to either take down the content or pay damages. Another bill requires large online platforms such as Facebook to remove or label deepfakes within 72 hours of a user reporting them, while yet another requires political campaigns to disclose their use of AI in advertising.

Also on Newsom’s desk are bills that would require creatives to get permission before using the likeness of a dead person and prohibit use of digital replicas in some instances. Both of those bills were supported by the actors union SAG-AFTRA.

Which bills didn’t pass

In lawmaking, what fails to pass, like Bauer-Kahan’s AI anti-discrimination bill, is often just as important as what advances.

Case in point: AB 3211, which would have required AI makers to label AI-generated content. It sputtered out despite support from companies including Adobe, Microsoft, and OpenAI. In a statement shared on social media on Tuesday, bill author Democratic Assemblymember Buffy Wicks of Oakland said it’s unfortunate that the California Senate did not take up her bill, which “was model policy for the rest of the nation.” She said she plans to reintroduce it next year.

The labeling bill and Bauer-Kahan’s bill are two of three measures flagged as key by European Union officials who advised California lawmakers behind the scenes to adopt AI regulation in line with the EU’s AI Act, which took five years to create and went into effect this spring. Gerard de Graaf, director of the EU’s San Francisco office, visited the California Legislature to meet with the authors of AB 3211, AB 2930, and SB 1047 in pursuit of the goal of aligning regulation between Sacramento and Brussels.

In an interview with CalMatters this spring, de Graaf said those three laws would accomplish the majority of what the AI Act seeks to do. This week, de Graaf had high praise for his California counterparts, saying he thinks state lawmakers did some serious work to pass so many different AI regulation bills, that they’re at the top of their game, and that they succeeded in being a world leader in AI regulation this year.

“This requires a thorough understanding and that’s not present in many legislatures around the world and in that sense California is a leader,” he said. “The fact that California achieved as much as it did in a year is not an insignificant feat and this will presumably continue.”

Despite having advised lawmakers on two bills that failed to pass, and despite the possibility of Senate Bill 1047 facing a veto, de Graaf said he sees convergence with EU AI policy in the passage of a bill that requires AI developers to disclose information about the datasets used to train AI models.

The fact that the bill meant to protect citizens from discriminatory AI didn’t pass is a really disappointing reflection of the power of tech capital in California politics, said UC Irvine School of Law professor Veena Dubal, whose research has dealt with technology and marginalized workers.

“It really feels like our legislature has been captured by tech companies who by their very structure don’t have the interest of the public at the forefront of their own advocacy or decision making, because they’re profit making machines,” she said.

She thinks the events of the past legislative session show that California will not be a leader in regulating generative AI because the power of tech companies is too unwieldy, but she does see signs of promise in the bills passed to protect kids from AI. She’s encouraged that the digital replica bills supported by SAG-AFTRA passed, a reflection of the worker strikes in 2023, and that lawmakers made clear that using generative AI to make child pornography and to curate content for kids without parental consent should be illegal. What seems more challenging is passing laws that require any degree of accountability. It shouldn’t be debatable whether people deserve protections from civil rights violations, and she wants lawmakers to label other uses of AI unacceptable, like using AI to evaluate people in the workplace.

“The fact that those laws (protecting kids) passed isn’t surprising, and my hope is that their passage paves a way for stopping or banning use of AI or automated decisionmaking in other areas of our lives in which it is clearly already wreaking harm,” she said.

ChatWTO: An Analysis of Generative Artificial Intelligence and International Trade 2024 – World Economic Forum
September 7, 2024

Generative artificial intelligence could contribute an estimated $4.4 trillion annually to the global economy, reshaping industries and international trade.

This white paper explores how this emerging technology is transforming global trade by enhancing productivity, streamlining supply chains, and creating new opportunities for cross-border transactions. It examines both the potential benefits and the challenges generative AI poses, such as regulatory hurdles, data privacy concerns, and intellectual property issues. By providing a comprehensive overview of AI’s impact on trade, the report offers actionable insights for policy-makers and businesses on how to best use these advancements while addressing risks.

Key themes include the growth of digital goods trade, AI’s role in improving trade efficiency, and the importance of global collaboration for effective AI governance. This analysis is essential for those looking to understand the future of international trade in an AI-driven world.

https://www3.weforum.org/docs/WEF_An_Analysis_of_Generative_Artificial_Intelligence_and_International_Trade_2024.pdf
Why Nvidia triggered a stock market freakout – Vox.com
September 7, 2024

What does Nvidia’s massive stock sell-off tell us about the economy?

by Ellen Ioanes

Updated Sep 5, 2024, 10:00 AM CDT

Nvidia CEO Jensen Huang during the Nvidia GTC Artificial Intelligence Conference at SAP Center on March 18, 2024, in San Jose, California. (Justin Sullivan/Getty Images)

Ellen Ioanes covers breaking and general assignment news as the weekend reporter at Vox. She previously worked at Business Insider covering the military and global conflicts.

Nvidia, the world’s leading AI chip manufacturer, sparked a global stock market downturn Wednesday, with indexes falling in Asia, Europe, and the United States.

After Bloomberg reported on Tuesday that the US Justice Department had issued Nvidia a subpoena as part of an antitrust investigation, a sell-off wiped roughly $279 billion off the company’s market value, a 9.5 percent drop in its stock. On Wednesday, a spokesperson denied that the company had received a subpoena, but said Nvidia is “happy to answer any questions regulators may have about our business.”

Still, the sell-off is bad news for Nvidia, and it renews existing concerns about the strength of the AI sector and the US economy more broadly.

That one company was able to have such an impact on global stock prices is a testament to Nvidia’s size and reach. Nvidia is the third most valuable company in the world. Because of its dominance, its success — or failure — can shift the tech-heavy Nasdaq stock index, where it is listed. And because it’s so entangled with other tech companies, when it falls, so does the stock of its partners, like Taiwan Semiconductor Manufacturing Company, which pulled down markets overseas. In the US, Nvidia pushed sell-offs throughout the entire tech industry. Microsoft, Amazon, and Intel shares were down as of Wednesday afternoon, though Nvidia competitor Advanced Micro Devices saw gains.

“One of the big risks is that you have this market concentration, and all it takes is those names to be volatile, for it to feed through to the entire market,” Justin Onuekwusi, chief investment officer at investment firm St. James’s Place, told Reuters Wednesday.

While Nvidia triggered this week’s stock market slump, there are a few other factors that have investors rattled. Recent concerns about China’s sluggish economy are putting a damper on a wide array of businesses, including an oil industry already struggling with falling prices. Weak manufacturing in the US, along with some higher prices in that sector, are part of the equation as well.

Nvidia’s troubles come amid rising uncertainty about the AI sector

Investors have significant concerns about whether the US tech sector is headed in the right direction. Questions about whether Nvidia is overvalued, and about the wisdom of investing so heavily in AI technology, have dogged the tech sector for months. Analysts from JPMorgan Asset Management and Blackrock cautioned earlier this week that massive spending on AI hasn’t been justified because the technology has limited applications outside the tech sector.

Companies like Microsoft and Meta have ignored that advice, spending as much as 40 percent of their hardware budgets — tens of billions of dollars — on Nvidia products to accelerate their own AI products. But that has investors worried that tech companies are betting too much on a future that may never come. And that if these giant companies have made a wrong bet, they may drag the stock market down with them.

“[Tech companies are] all kind of saying, ‘Look, we’re not going to be on the wrong side of this. We’re going to invest,’” Daniel Newman, CEO of the Futurum Group, a global technology research and advisory firm, told Vox. “But I’m not hearing for what, or where this provides the return. And I think there’s a little bit of hesitation on [Wall Street] — people want to know where that return comes from.”

The Trillion-Dollar AI Investment: Balancing Skepticism and Optimism
July 15, 2024

This post summarizes the Goldman Sachs report “Gen AI: Too Much Spend, Too Little Benefit?”

Key Points from the Report:

  1. Investment Scale:
  • Tech giants and other companies are projected to spend over $1 trillion on AI capital expenditures in the coming years, including investments in data centers, chips, AI infrastructure, and power grids[1][2].
  2. Skeptical Views:
  • Daron Acemoglu (MIT): Acemoglu argues that the economic upside from AI over the next decade will be limited, predicting only a ~0.5% increase in productivity and ~1% increase in GDP. He believes that the transformative changes promised by generative AI will not happen quickly and will primarily enhance the efficiency of existing production processes rather than create new ones[1].
  • Jim Covello (Goldman Sachs): Covello is skeptical about AI’s ability to solve complex problems that justify the high costs. He suggests that the expected decline in technology costs may not materialize as anticipated[1].
  3. Optimistic Views:
  • Joseph Briggs, Kash Rangan, and Eric Sheridan (Goldman Sachs): These analysts are more optimistic about AI’s long-term economic potential. They believe that AI will eventually generate significant returns, even though its “killer application” has yet to emerge. They view the current phase as the “picks and shovels” stage, where foundational investments are being made[1][2].
  4. Constraints and Challenges:
  • Chips and Power Shortages: The report also discusses potential constraints on AI growth due to the current semiconductor shortage (analyzed by Toshiya Hari) and a looming power shortage (discussed with Brian Janous from Cloverleaf Infrastructure)[1][2].
  5. Market Implications:
  • Despite the concerns and constraints, the report suggests there is still room for the AI theme to develop, either through delivering on its promise or through prolonged investment cycles typical of technological bubbles[1][2].

Conclusion:

The Goldman Sachs report presents a balanced view on the future of AI investments, highlighting both the skepticism around immediate economic benefits and the optimism for long-term potential. The debate underscores the uncertainty in predicting the exact trajectory of AI’s impact on the economy and the importance of continued scrutiny and strategic investment[1][2][5].

Citations:
[1] https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf
[2] https://www.goldmansachs.com/intelligence/pages/gen-ai-too-much-spend-too-little-benefit.html
[3] https://www.reddit.com/r/consulting/comments/1e0amhe/goldman_sachs_gen_ai_too_much_spend_too_little/
[4] https://news.ycombinator.com/item?id=40856329
[5] https://www.firstpost.com/tech/goldman-sachs-calls-genai-overhyped-wildly-expensive-warns-investor-of-ai-bubble-popping-soon-13793152.html

What happened to the artificial-intelligence revolution? – The Economist
July 7, 2024


Move to San Francisco and it is hard not to be swept up by mania over artificial intelligence (AI). Advertisements tell you how the tech will revolutionize your workplace. In bars, people speculate about when the world will “get AGI”, or when machines will become more advanced than humans. The five big tech firms—Alphabet, Amazon, Apple, Meta, and Microsoft, all of which have either headquarters or outposts nearby—are investing vast sums. This year they are budgeting an estimated $400bn for capital expenditures, mostly on AI-related hardware, and for research and development.
In the world’s tech capital, it is taken as read that AI will transform the global economy. But for AI to fulfil its potential, firms everywhere need to buy the technology, shape it to their needs and become more productive as a result. Investors have added more than $2trn to the market value of the five big tech firms in the past year—in effect projecting an extra $300bn-400bn in annual revenues according to our rough estimates, about the same as another Apple’s worth of sales. For now, though, the tech titans are miles from such results. Even bullish analysts think Microsoft will make only about $10bn from generative-AI-related sales this year. Beyond America’s West Coast, there is little sign AI is having much of an effect on anything.
This article appeared in the Finance & economics section of the print edition under the headline “A sequence of zeroes”


My Top Artificial Intelligence (AI) Stock to Buy Now (and It's Not Nvidia) – The Motley Fool
June 23, 2024

Adobe is monetizing AI in the enterprise-software space.
Artificial intelligence (AI) demands increased computing power, which has been a boon for technology infrastructure and semiconductor companies like Nvidia. These companies benefit from the need to run complex AI models no matter where they come from.
Enterprise-software companies like Adobe (ADBE) are challenged because they have to prove AI is worth investing in. In other words, users need to like and pay for what Adobe is building. The company’s recent results indicate its strategy is working.
Investors cheered Adobe’s second-quarter fiscal 2024 financial results and updated full-year guidance — sending the stock soaring on Friday. The earnings call was, in many ways, similar to the Q1 call. Only this time, Adobe’s AI investments translated to impeccable results and high margins.
Even after the run-up, Adobe remains an underrated growth stock to buy now. Here’s why.
It’s a mistake to get too caught up in a company’s quarterly results. But I think a few years from now, we may look back at this one as a turning point for Adobe.
Document Cloud revenue grew 19% as Adobe added a record $165 million of new Document Cloud annualized-recurring revenue. Digital Experience subscription revenue grew 13% year over year, and Creative Cloud grew revenue 11% on a constant-currency basis. Commenting on its Creative Cloud segment, Adobe management said it experienced “strong renewals as customers migrate to higher-value, higher [average revenue per user] ARPU Creative Cloud plans that include Firefly entitlements.”
Adobe has implemented its generative AI tool, Firefly, across its flagship products. It’s encouraging to see that Firefly is driving customers to spend more money.
Up until now, Adobe’s expenses were outpacing its gross profit. But this quarter, operating income increased at a higher rate than gross profit — boosting margins and indicating the company is improving its profitability and managing costs. Adobe booked a generally accepted accounting principles (GAAP) operating margin of 35.5% in the quarter and a non-GAAP operating margin of 46%. For context, Adobe has averaged a GAAP operating margin in the low 30% range for the last five years.
Commercial subscriptions continue to be a standout for Adobe. But the company is also seeing growing interest in and usage of its Express mobile and Express for Business offerings; Adobe Express is an all-in-one app that leverages AI to help users create graphics, PDFs, and short-form videos.
Longer term, the key for Adobe will be catering to all customers — commercial, individual, and education — across all categories. A business may be able to justify a higher price tag and experiment with new tools. However, Adobe needs to find a pricing structure for different markets. Monitoring the adoption of an all-in-one tool like Adobe Express will be a good way to gauge interest in generative AI from individual users, so it’s worth following up on in future investor presentations.
Adobe, a cash cow with recurring revenue, can afford to make long-term investments and buy back stock. Its earnings growth can come from net income and reducing the outstanding share count to boost earnings per share.
Adobe’s updated guidance calls for non-GAAP earnings per share of $18.00 to $18.20 — giving it a price-to-earnings ratio of 29 based on its 2024 target and current stock price of around $525 a share. Adobe spent $2.5 billion on buybacks in the quarter. Last quarter, it announced a $25 billion buyback program that runs through fiscal 2028. That level of buybacks is substantial, considering Adobe has a market cap of $235 billion. It also indicates that Adobe has extra dry powder and that its spending isn’t out of control.
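For readers who want to check the article’s price-to-earnings figure, here is a minimal sketch of the arithmetic, assuming the guidance midpoint of $18.10 in non-GAAP earnings per share and the roughly $525 share price quoted above (an approximation taken from the paragraph, not a live quote).

```python
# Minimal sketch: forward P/E computed from the figures quoted in the article above.
eps_low, eps_high = 18.00, 18.20   # Adobe's non-GAAP EPS guidance range for fiscal 2024
share_price = 525.00               # approximate share price cited in the article (assumed)

eps_midpoint = (eps_low + eps_high) / 2      # 18.10
forward_pe = share_price / eps_midpoint      # about 29.0
print(f"Forward P/E at ${share_price:,.0f}: {forward_pe:.1f}")
```

Dividing the quoted share price by the midpoint of the guidance range gives a forward multiple of roughly 29, matching the figure in the paragraph above.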
Another advantage of an enterprise-software company like Adobe is that it doesn’t rely on debt to operate the business. Low cost of goods sold and recurring revenue mean that the main costs are operating expenses like sales, marketing, research, and development.
Adobe has more cash and cash equivalents on its balance sheet than long-term debt. And it doesn’t pay a dividend. So, when the company generates outsized gains, you can expect it to reinvest those profits back in the business and accelerate organic growth, make acquisitions, or repurchase stock. The capital-light nature of the business is a key advantage compared to leveraged companies that are pressured to use outsized profits to pay down debt.
Analysts have been direct with Adobe management on the last couple of earnings calls. Adobe was grilled about its lack of profitability and weak guidance in Q1. This quarter, there was a focus on enterprise software monetizing AI and the vulnerability of a user-based subscription model.
Arguably, the most important moment from the earnings call was when CEO Shantanu Narayen responded to an analyst question on AI becoming so strong that it reduces the need for larger user-based marketing teams — in other words, the existential threat of AI generating content on its own, so there is no longer a need for a subscription model based on the number of users. He said: 
If the value of AI doesn’t turn to inference and how people are going to use it, then I would say all of that investment would not really reap the benefit in terms of where people are spending the money. And so we’re always convinced that when you have this kind of disruptive technology, the real benefits come when people use interfaces to do whatever task they want to do quicker, faster, and when it’s embedded into the workflows that they’re accustomed to because then there isn’t an inertia associated with using it. So with that sort of as a broad segment, I am a big believer that generative AI is going to, for all the categories that we’re in, it’s actually going to dramatically expand the market because it’s going to make our products more accessible, more affordable, more productive in terms of what you — what we can do.
Narayen is making the case that chip companies have benefited from AI, but the real impact comes from what generative AI can do to improve software applications. That may be true, but even if AI doesn’t completely replace marketing teams, efficiency improvements could still lead to fewer software licenses. If one user can accomplish the tasks that used to take two or three users, this can lead to higher revenue per subscriber but fewer overall subscribers.
This isn’t an Adobe-specific problem but a concern for all enterprise-software companies that depend on recurring revenue charged by the number of users. Uncertainty regarding whether AI will be a net positive or negative over the long term is one of the biggest question marks impacting the investment thesis.
When building an investment thesis, it’s important to understand the bear case and why the investment may not work out. A couple of years ago, Adobe’s biggest red flag was a lack of growth and innovation. Today, Adobe is returning to growth and has a clear trajectory for monetizing AI, but there’s the risk of too much innovation weakening its business model.
It all comes down to which risk you view as greater. Innovative companies usually win out over the long term, and I think Adobe can adapt its pricing model over time if necessary. So, taking a step back, the investment thesis has gotten much stronger, and the financials look better, too.
Adobe is my top AI stock to buy now because I think the valuation is reasonable, and there’s untold market potential for building AI creative tools. If Adobe can build tools that can handle a larger share of a marketing campaign or content creation for a social media account, the benefits would be so valuable that they could overcome user-volume declines. It’s too early to tell how it will play out, but the risk and potential reward make sense for patient investors.
Daniel Foelber has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Adobe and Nvidia. The Motley Fool has a disclosure policy.

New Jersey releases guidance to support AI ‘moonshot’ in schools – Chalkbeat
June 23, 2024

As part of Gov. Phil Murphy’s call to create an “artificial intelligence moonshot” in New Jersey, the state’s department of education unveiled a set of resources last week aimed at helping educators understand, implement, and manage artificial intelligence in schools, state education officials said.
The resources range from articles about teaching and learning on artificial intelligence to a webinar that explains the history of the technology and how it is used in education. The materials do not outline strict regulations on how to use AI in education but they are New Jersey’s first guidance for school districts to “responsibly and effectively” integrate AI-powered technology in the classroom, and incorporate tools to facilitate administrative tasks in schools, according to a state department of education press release.
But as the technology gains popularity, education experts continue to note that safety and privacy concerns should remain a top priority as AI expands in schools. Randi Weingarten, president of the American Federation of Teachers, says states should consider protections for AI in classrooms that take educators and parents into account.
“We know that school districts can’t just say privacy matters,” Weingarten said. “There has to be a tech translator, there have to be parent information sessions, and there has to be classroom guidance.”
The state’s new artificial intelligence resources come as Newark Public Schools takes steps to incorporate more AI in classrooms and surveillance systems.
Last month, the school board approved a $12 million project to install more than 7,000 AI cameras districtwide this summer. District leaders said the high-tech surveillance system is meant to make schools safer, but security experts warned that such capabilities could result in an invasion of privacy or could potentially misidentify items or students.
The district is also considering the expansion of Khanmigo, an AI program designed for the classroom and meant to tutor students and assist teachers. So far, there is little research on whether tools like Khanmigo are effective, but experts have also said school districts should consider the learning goals for their students.
New Jersey’s resources do not set parameters for student privacy but the department of education created an artificial intelligence webpage that provides an overview of AI and its systems, terms, and concepts, and guidance tailored for school leaders and teachers. The page will be updated regularly to keep up with the “fast-paced” changes to AI, the state said.
The state also released a webinar that introduces the fundamentals of AI technology and explains how the technology can support and enhance teaching and learning and provide personalized feedback to students depending on the type of technology. AI systems that use machine learning, such as facial recognition software or email spam filters, employ algorithms to make decisions based on data, while systems like chatbots use deep learning to identify complex patterns and relationships in data, the webinar explained.
The state’s webinar also prompts school districts to think about how new technology can support student learning and suggests that districts should review policies as AI evolves and integrates into learning. It also encourages school leaders to think through guidelines for acceptable and unacceptable uses of AI and discuss how the new tools are best implemented.
AFT President Weingarten says “there is tremendous potential for AI use in schools” but school districts and their tech departments should review programs and materials before allowing students access to them. She also warned that with any new technology, the safety and privacy of students should be protected.
AFT released its own set of AI guard rails on Tuesday that focus on educators and provide resources for teachers as they grapple with the new integration of AI in schools. The report lists six core values that focus on maximizing safety and privacy, empowering educators to make decisions on AI, and advancing fairness and equity of the technology among other values.
Through its Innovation Fund, AFT is also providing over $200,000 to 11 school districts across the country to find solutions to incorporate, understand, and regulate AI with input from educators. The United Federation of Teachers in New York City, Cranston Teachers Alliance in Rhode Island, Pinellas Classroom Teachers Association in Florida, and other union locals will work with their school districts to create AI summits to understand and establish guidelines, provide hands-on training for educators, and establish workshops, panels, and community events.
“I’m not saying that there’s not a way to do it, but who’s responsible for data privacy, who’s responsible for student protection?” Weingarten said.
The state department’s office of innovation plans to meet with educators to obtain feedback, learn how AI is being used in classrooms, and discover existing needs to inform new guidance, resources, and professional development, according to the state’s press release. The department is also part of the Teach AI initiative, a consortium of state departments of education and international organizations that work to create guidelines for AI policy and resources.
Jessie Gómez is a reporter for Chalkbeat Newark, covering public education in the city. Contact Jessie at jgomez@chalkbeat.org.


Pope Francis becomes first pontiff to address a G7 summit, raising alarm about AI. The G7 responds – The Associated Press
June 18, 2024

Pope Francis arrived on Friday for the roundtable outreach meeting at the annual G7 summit in southern Italy, using the occasion to join the chorus of countries and global bodies pushing for stronger guardrails on AI following the boom in generative artificial intelligence kickstarted by OpenAI’s ChatGPT chatbot.
Pope Francis addresses world leaders, including U.S. President Joe Biden, French President Emmanuel Macron, Italian Premier Giorgia Meloni, Indian Prime Minister Narendra Modi, and United Arab Emirates President Sheikh Mohamed bin Zayed Al Nahyan, during a working session on AI, energy, Africa, and the Mideast at the G7 summit in Borgo Egnazia, near Bari, southern Italy, on Friday, June 14, 2024. (AP Photo/Alex Brandon, Andrew Medichini; pool photos by Christopher Furlong)
BARI, Italy (AP) — Pope Francis challenged leaders of the world’s wealthy democracies on Friday to keep human dignity foremost in developing and using artificial intelligence, warning that such powerful technology risks turning human relations themselves into mere algorithms.
Francis brought his moral authority to bear on the Group of Seven, invited by host Italy to address a special session at their annual summit on the perils and promises of AI. In doing so, he became the first pope to attend the G7, offering an ethical take on an issue that is increasingly on the agenda of international summits, government policy and corporate boards alike.
Francis said politicians must take the lead in making sure AI remains human-centric, so that decisions about when to use weapons, or even less-lethal tools, are always made by humans and not machines.
“We would condemn humanity to a future without hope if we took away people’s ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines,” he said. “We need to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: Human dignity itself depends on it.”
The G7 final statement largely reflected his concerns.
The leaders vowed to better coordinate the governance and regulatory frameworks surrounding AI to keep it “human-centered.” At the same time, they acknowledged the potential impacts of AI on labor markets, where machines may replace human workers, and on the justice system, where algorithms are used to predict recidivism.

“We will pursue an inclusive, human-centered, digital transformation that underpins economic growth and sustainable development, maximizes benefits, and manages risks, in line with our shared democratic values and respect for human rights,” they said.
By attending the summit, Francis joined a chorus of countries and global bodies pushing for stronger guardrails on AI following the boom in generative AI kickstarted by OpenAI’s ChatGPT chatbot.
The Argentine pope used his annual peace message this year to call for an international treaty to ensure AI is developed and used ethically. In it, he argued that a technology lacking human values of compassion, mercy, morality and forgiveness is too perilous to develop unchecked.
He didn’t repeat that call explicitly in his speech Friday, but he made clear the onus is on politicians to lead on the issue. He also urged them to ultimately ban the use of lethal autonomous weapons, colloquially known as “killer robots.”
“No machine should ever choose to take the life of a human being,” he said.
On the weapons issue, the G7 leaders said they recognized the impact of AI in the military domain “and the need for a framework for responsible development and use.” They encouraged states to make sure “military use of AI is responsible, complies with applicable international law, particularly international humanitarian law, and enhances international security.”
Italian Premier Giorgia Meloni had invited Francis and announced his participation, knowing the potential impact of his star power and moral authority on the G7. Those seated at the table seemed duly awed, and the boisterous buzz in the room went absolutely quiet when Francis arrived.
“The pope is, well, a very special kind of a celebrity,” said John Kirton, a political scientist at the University of Toronto who directs the G7 Research Group think tank.

Kirton recalled that the last summit with this kind of star power, which then translated into action, was the 2005 meeting in Gleneagles, Scotland. There, world leaders decided to wipe out $40 billion of debt owed by 18 of the world’s poorest countries to the World Bank and the International Monetary Fund.
That summit was preceded by a Live 8 concert in London that featured Sting, The Who and a reformed Pink Floyd and drew over a million people in a show of solidarity against hunger and poverty in Africa.
“Gleneagles actually hit a home run and for some it’s one of the most successful summits,” Kirton said.
No such popular pressure was being applied to G7 leaders in the Italian region of Puglia, but Francis knew he could wield his own moral authority to renew his demands for safeguards for AI and highlight the threats to peace and society it poses if human ethics are left to the side.
“To speak of technology is to speak of what it means to be human and thus of our singular status as beings who possess both freedom and responsibility,” he said. “This means speaking about ethics.”
Generative AI technology has dazzled the world with its ability to produce humanlike responses, but it has also sparked fears about AI safety and led to a jumble of global efforts to rein it in.
Some worry about catastrophic but far-off risks to humanity, given AI’s potential for creating new bioweapons and supercharging disinformation. Others fret about its effect on everyday life, through algorithmic bias that results in discrimination or AI systems that eliminate jobs.
In his peace message, Francis echoed those concerns and raised others. He said AI must keep foremost concerns about guaranteeing fundamental human rights, promoting peace and guarding against disinformation, discrimination and distortion.
On the regulation front, Francis was in some ways preaching to the converted as the G7 members have been at the forefront of the debate on AI oversight.
Japan, which held the G7’s rotating presidency last year, launched its Hiroshima AI process to draw up international guiding principles and a code of conduct for AI developers. Adding to those efforts, Prime Minister Fumio Kishida last month unveiled a framework for global regulation of generative AI, the systems that can quickly churn out new text, images, video and audio in response to prompts and commands.
The European Union was one of the first movers with its wide-ranging AI Act that’s set to take effect over the next two years and could act as a global model. The act targets any AI product or service offered in the bloc’s 27 nations, with restrictions based on the level of risk they pose.
In the United States, President Joe Biden issued an executive order on AI safeguards and called for legislation to strengthen it, while some states like California and Colorado have been trying to pass their own AI bills, with mixed results.
Britain kickstarted a global dialogue on reining in AI’s most extreme dangers with a summit last fall. At a follow-up meeting in Seoul, companies pledged to develop the technology safely. France is set to host another meeting in the series early next year. The United Nations has also weighed in with its first resolution on AI.
Chan reported from London.
Copyright 2024 The Associated Press. All Rights Reserved.

]]>
https://abcsofai.com/2024/06/18/pope-francis-becomes-first-pontiff-to-address-a-g7-summit-raising-alarm-about-ai-the-g7-responds-the-associated-press/feed/ 0 29589