ABC's of AI – Artificial Intelligence the Easy Way (https://abcsofai.com)

Could Artificial Intelligence Bankrupt Social Security? – Above the Law
Thu, 19 Feb 2026
The solvency of the social security trust fund has been a concern for the past few decades.
As artificial intelligence becomes more mainstream, many people are using it to do things they would normally hire someone else to do. AI is helping people conduct research, analyze options, and even create artistic works. While this makes certain tasks easier and more efficient, it also means that some people will see reduced work or lose their livelihoods. For governments that have to look at the big picture, more unemployment translates to less tax revenue, and in particular to less of the payroll tax revenue that funds social security. Could AI pose a serious threat to future social security and Medicare funding? If so, what can be done about it?
Many people are concerned AI will take over white-collar jobs that humans do now. While the technology is not perfect — which some lawyers have learned the hard way — it is improving. The worry ran deep enough that in 2023, Hollywood writers and actors went on strike to prevent studios from using AI to draft stories and to use actors’ likenesses.
Employees and business owners pay into social security and Medicare: employees through paycheck withholdings, and business owners through self-employment taxes on their net business profit. While this tax is substantial — 7.65% for employees and 15.3% for self-employed business owners — there is a cap on the income subject to the social security portion. For 2026, the income cap is $184,500, so any income above that is not subject to social security taxes, although it is still subject to Medicare taxes.
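A minimal sketch of that arithmetic, using the 2026 figures above. The 6.2%/1.45% split of the combined employee rate (and 12.4%/2.9% for the self-employed) is the standard FICA breakdown, stated here as an assumption since the article only gives the combined rates:

```python
# Illustrative payroll-tax math from the article's 2026 figures.
SS_WAGE_CAP = 184_500  # 2026 social security income cap cited above

def fica_taxes(income: float, self_employed: bool = False) -> dict:
    ss_rate = 0.124 if self_employed else 0.062        # social security portion
    medicare_rate = 0.029 if self_employed else 0.0145  # Medicare portion
    ss_tax = min(income, SS_WAGE_CAP) * ss_rate  # capped at the wage base
    medicare_tax = income * medicare_rate        # no cap on Medicare
    return {"social_security": round(ss_tax, 2),
            "medicare": round(medicare_tax, 2)}

# An employee earning $250,000 pays social security tax only on the
# first $184,500, but Medicare tax on the full amount.
print(fica_taxes(250_000))  # → {'social_security': 11439.0, 'medicare': 3625.0}
```

This is why AI-driven job losses hit social security harder than Medicare: every dollar of lost wages below the cap is a dollar of lost social security tax base.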
If jobs are replaced by AI, then the government will lose these social security and Medicare tax payments. The solvency of the social security trust fund has been a concern for the past few decades and, based on projections, the trust fund is on track to be insolvent in 2032. If insolvency becomes certain, the government will have to make tough choices to save the trust fund: cutting benefits, raising the eligibility age, or raising taxes.
Skeptics will note that technological disruption is nothing new and that, despite the doomsday talk, the economy will adapt. They will cite machines replacing assembly-line workers in the 20th century, or e-commerce disrupting Main Street brick-and-mortar stores.
Several proposals have been made to address this potential problem. One is to impose additional taxes on AI companies and businesses that use AI to compensate for the revenue lost due to increased unemployment. Imposing this tax on small businesses will be unpopular and the tax could be passed on to the customer if economic conditions allow it. Also, detecting whether a business is using AI could be difficult.
Another idea getting a lot of attention lately is universal basic income (UBI), where everyone gets a regular cash payment whether they are working or not. It could act as a safety net, letting people cover basic living expenses even if automation wipes out their jobs. But UBI could disincentivize work and embolden people to mouth off without fear of financial consequences. Considering the very large cost and the potential negative incentives, UBI is likely to be a last resort to prevent bigger societal problems.
Probably the easiest solution is to change how social security is funded: in short, enact new taxes, raise existing ones, or impose a new tariff.
The final, and possibly the most sensible, solution is to wait and see. The proliferation of AI could create jobs that did not previously exist. Granted, most of the new jobs will involve servicing and maintaining the AI infrastructure, but jobs involving other disciplines and skill sets could emerge. For example, philosophers, ethicists, and religious leaders could be hired to help with AI’s moral programming.
AI is transforming work in ways we haven’t seen before, and it has the potential to strain social security systems by potentially shrinking the number of contributors. But if we get ahead of it — with things like exploring UBI, updating funding models, and sparking new kinds of jobs — we can make sure the upsides of AI benefit everyone, not just a few. The trick is to embrace the tech while protecting the people who get caught in the transition.
Steven Chung is a tax attorney in Los Angeles, California. He helps people with basic tax planning and resolving tax disputes. He is also sympathetic to people with large student loans. He can be reached via email at [email protected], on Twitter (@stevenchung), or on LinkedIn.


Elon Musk’s xAI Gets $3 Billion Investment From Saudi-Backed A.I. Firm – The New York Times
Thu, 19 Feb 2026
How artificial intelligence will impact the private jet charter market – Private Jet Card Comparisons
Wed, 18 Feb 2026
For decades new entrants have promised to revolutionize the archaic private jet charter market. AI is now center stage. What’s the reality?
By Doug Gollan, 10 hours ago
Recently, Tuvoli President & CEO Greg Johnson joined the Ironbird Podcast with host Dan Harris to discuss “What’s Broken In Private Jet Charter.”
One subject was the use of Artificial Intelligence (AI), which has become a trend du jour in press releases. 
In fact, one company, Jet Token, renamed itself Jet.AI.
To bring you their discussion about AI, we provided the transcript of the podcast to ChatGPT. We then asked ChatGPT to provide a summary of their AI discussion and offer a perspective on where AI can support the private jet charter market and its limitations. The article was mainly edited with the assistance of Grammarly. 
By ChatGPT
In the private jet charter market, artificial intelligence is often discussed as a destination rather than a tool.
Dan Harris surfaces this tension in his interview with Greg Johnson by repeatedly pressing on a deceptively simple question: what, exactly, is AI supposed to fix?

Johnson’s answer is consistent throughout the conversation.
The industry’s biggest risk is not failing to adopt AI fast enough. It is applying automation and AI to the wrong layers of the business, creating false confidence instead of better outcomes.
For years, charter operators have leaned on automation to handle volume.
Quotes are generated faster.
Requests are routed more efficiently.
Data is pushed across systems with increasing speed.
But Johnson argues that this acceleration has masked a deeper failure: automation has multiplied activity without improving understanding.
More quotes are being generated than ever before, yet operators often have less clarity than they did a decade ago about why trips book or fail.
Automation has increased noise faster than it has improved the signal.
This is where AI is frequently misunderstood.
Johnson draws a clear distinction between automation and artificial intelligence.
Automation is deterministic.
It follows rules.
When X happens, Y is triggered.
AI, by contrast, introduces probabilistic reasoning.
It evaluates patterns, context, and multiple possible outcomes before selecting a path.
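The distinction can be sketched with a toy example; the routing rule and the scoring weights below are invented purely for illustration and are not drawn from the podcast:

```python
# Deterministic automation: when X happens, Y is triggered.
# Same input always yields the same output.
def automation_route(request: dict) -> str:
    if request["passengers"] > 8:
        return "heavy_jet_desk"
    return "light_jet_desk"

# Probabilistic "AI" routing: weigh several signals into a score, then
# pick the most likely path. These weights are made-up stand-ins for
# what a trained model would learn from historical booking data.
def ai_route(request: dict) -> str:
    score = (0.4 * (request["passengers"] / 10)
             + 0.3 * request["past_bookings_norm"]  # normalized 0..1
             + 0.3 * request["urgency"])            # normalized 0..1
    return "priority_queue" if score > 0.5 else "standard_queue"

req = {"passengers": 6, "past_bookings_norm": 0.9, "urgency": 0.8}
print(automation_route(req), ai_route(req))
```

The rule looks at one field and fires; the scorer weighs context from several fields, so two requests with identical passenger counts can land in different queues.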
The danger lies in assuming that AI can substitute for certainty in an industry where none exists.
Harris pushes Johnson on whether AI can eventually solve availability, pricing accuracy, or real-time aircraft certainty.
Johnson is unequivocal: those are the wrong problems to chase.
Aircraft break. Crews time out. Weather intervenes.
Even airlines with massive fleets cannot eliminate last-minute disruption.
Expecting AI to deliver perfect certainty in charter operations is a category error.
Yet much of the industry’s AI enthusiasm is implicitly built on that expectation.
Johnson warns that when AI is applied to problems such as “guaranteed availability” or “perfect real-time pricing,” it creates a veneer of precision that does not withstand real-world conditions.
The result is not better decision-making, but misplaced trust in outputs that appear authoritative yet are unreliable.
Where AI does add real value, Johnson argues, is in reducing friction and cognitive load — not replacing judgment.
AI can clean and reconcile messy data that humans struggle to manage at scale.
It can triage inbound requests, identify likely buyers versus low-intent inquiries, and surface patterns in historical booking behavior that no individual salesperson could reasonably detect.
It can significantly reduce time spent on repetitive tasks such as data cleanup, summarization, and internal communication.
But AI should not be asked to make safety decisions, override operational judgment, or promise outcomes that the physical world cannot guarantee.
Harris frames this as a maturity issue.
The charter industry, he suggests, has seen wave after wave of “revolutionary” technology promises — many of which failed to materialize at scale.
That history breeds skepticism and impatience.
When AI arrives with genuine capability, the temptation is to overcorrect and expect too much, too fast.
Johnson acknowledges this tension.
AI is moving faster than many previous technologies. Its ability to generate content, analyze data, and automate knowledge work is real.
But speed does not eliminate the need for domain expertise. In fact, it increases it.
One of Johnson’s most pointed observations is that AI solutions built outside aviation often fail because they address what customers say they want rather than what the industry actually needs.
Charter workflows are shaped by regulation, safety culture, owner economics, and operational nuance.
Without understanding those constraints, AI tools risk optimizing for metrics that don’t matter—or worse, actively harming margins and decision quality.
The most productive use of AI in charter today, Johnson suggests, is incremental and disciplined. Improve visibility. Reduce manual reconciliation. Help humans make better decisions faster — but keep humans accountable for those decisions.
Harris ties this back to a broader industry consequence.
When AI is oversold internally, operators either become overconfident or disengaged.
Sales teams stop questioning outputs.
Management assumes problems have been “solved” because a system exists. Meanwhile, the underlying issues—poor booking insight, payment ambiguity, and process friction—remain unresolved.
In that sense, AI does not fail the charter market. Charter fails AI by expecting it to replace foundational disciplines.
The path forward, Johnson argues, is not less technology but better sequencing.
Fix transactional clarity first.
Ensure systems of record exist.
Align sales, finance, and operations around a shared truth.
Only then does AI become a force multiplier rather than a distraction.
The following is ChatGPT’s response to our question, “At the end of your summary of the Ironbird podcast, can you provide ChatGPT’s perspective on potential and limitations relevant to the private jet charter market?”
The fact that this summary was itself produced by ChatGPT is worth addressing directly, particularly in an industry currently grappling with what AI can and cannot do.
ChatGPT’s strengths in the private jet charter market lie primarily in synthesis, explanation, and communication.
It can analyze large volumes of unstructured information—transcripts, contracts, emails, and policies — and quickly extract themes, risks, and narratives.
It excels at clarifying complex processes, reframing industry discussions, and helping professionals articulate their ideas more clearly to clients, investors, and internal teams.
ChatGPT is particularly useful for tasks of that kind: synthesizing transcripts and documents, clarifying complex processes, and helping professionals communicate with clients and teams.
However, ChatGPT has critical limitations that mirror the warnings Greg Johnson raised in this discussion.
Most importantly, ChatGPT’s outputs are only as good as the context and data provided.
Without domain expertise guiding the questions, AI can confidently produce answers that sound right but miss critical nuance.
In the charter market, ChatGPT should be treated as an analytical assistant, not an authority.
It can help professionals think faster and communicate better — but it should never be mistaken for a decision-maker.
Used correctly, ChatGPT reduces friction and improves understanding.
Used incorrectly, it creates false confidence.
That distinction matters — in AI, and in charter.
Builders’ Use of Artificial Intelligence – Eye on Housing
Wed, 23 Jul 2025
Eye On Housing
National Association of Home Builders Discusses Economics and Housing Policy
The majority of single-family home builders do not currently use Artificial Intelligence (AI) in their businesses. The most common uses are generating advertising/marketing materials (20% of builders) and helping analyze markets and plan projects (11%). Fewer than 5% currently use the tool for any of the other 10 business functions in the chart below, from designing projects to operating automated construction equipment. These findings come from the July 2025 survey for the NAHB/Wells Fargo Housing Market Index (HMI) and reflect an early industry reading likely to evolve in the coming years.
Builders not currently using AI were asked about the likelihood they will start doing so in the next two years (using a scale from 1 to 5, where 1=not at all likely and 5=very likely).  Not surprisingly, the two areas most likely to see new builders adopting AI are the generation of advertising/marketing materials (average rating 3.6) and the analysis of markets/plan projects (3.0)—the same ones that boast the largest adoption rates already.
Meanwhile, the chance that builders will take up AI in any of the other business functions is much lower, as all 10 received average likelihood ratings below 3.0. The two areas where builders are least likely to start using AI in the next two years are operating automated construction equipment (average rating: 1.7) and interacting with the local building or planning department (1.9).
'AI godmother' Fei-Fei Li raises $230 million to launch AI startup – Reuters
Fri, 11 Oct 2024

Fei-Fei Li, a prominent AI researcher, has secured $230 million for a startup she co-founded with three colleagues. Their mission: developing AI technology that understands the intricate workings of the three-dimensional physical world. The company, World Labs, announced this achievement on Friday.

Initial funding for World Labs was led by Andreessen Horowitz, New Enterprise Associates, and Radical Ventures, with additional investments from AMD Ventures, Intel Capital, and Nvidia’s NVentures. The company did not disclose its valuation.

Li, recognized as one of Time Magazine’s 100 Most Influential People in AI for 2023, has a distinguished career. She led AI at Google Cloud from 2017 to 2018, served on Twitter’s board, and advised policymakers, including at the White House. Known affectionately as the “godmother of AI,” Li gained acclaim for developing ImageNet, a dataset pivotal in advancing computer vision technology.

In a more jovial tone:
Li, often dubbed the “godmother of AI,” has bagged a whopping $230 million for her latest venture, World Labs! This startup, co-founded with her savvy colleagues, aims to teach AI how the real world ticks in 3D. Funding was led by big shots like Andreessen Horowitz and Intel, alongside a star-studded lineup of backers.

Time Magazine’s AI influencer of 2023, Li’s resume sparkles with stints at Google Cloud, Twitter’s board, and even White House consultations. Her famous ImageNet dataset was a game-changer in computer vision, earning her accolades and the endearing nickname.

Known for her visionary ideas, Li isn’t just shaping AI’s future; she’s reinventing it with a focus on spatial intelligence. Forget flat screens—she’s all about AI that understands the real world, from AR/VR to robotics. And while other AI churns out mere images, Li’s team is crafting models that grasp how our 3D universe truly operates.

Based in San Francisco with a team of 20, World Labs isn’t Li’s first rodeo in entrepreneurship. From buying her parents’ dry cleaning business to revolutionizing AI, she’s a force to be reckoned with, balancing Stanford lectures with startup grind like a boss!

OpenAI’s Power Shakeup: CTO Mira Murati Exits as $150 Billion Pivot Looms
Wed, 25 Sep 2024

Murati Steps Down: What Her Exit Means for OpenAI’s Billion-Dollar AI Revolution

Breaking news from the world of AI: Mira Murati, the Chief Technology Officer of OpenAI, has dropped a bombshell. In a surprise move announced on X, Murati declared her resignation after a distinguished tenure spanning six and a half years at the company. Known for steering OpenAI through key advancements like DALL-E and Codex, Murati cited a personal need for exploration as her reason for stepping down.

Her departure couldn’t come at a more pivotal moment for OpenAI, which is amidst a seismic shift towards becoming a for-profit entity, potentially valuing the company at a staggering $150 billion. This transition marks a strategic pivot, granting CEO Sam Altman equity and signaling a new era for the AI giant.

Murati’s exit follows a string of high-profile departures, including former Chief Scientist Ilya Sutskever and President Greg Brockman, raising eyebrows about the organization’s stability and future trajectory, particularly its pursuit of AGI.

In response to the news, Altman expressed gratitude for Murati’s contributions, hinting at forthcoming leadership changes amidst these turbulent times. As OpenAI braces for these transitions, questions loom about who will fill Murati’s shoes and how the company will navigate this pivotal juncture.

Apple Faces Challenges in China as Huawei Capitalizes on AI Features
Thu, 12 Sep 2024

The launch of Apple’s iPhone 16 series in China has been met with mixed reactions, as local users discovered that the highly anticipated AI features would not be available in their language until next year[1]. This delay has sparked skepticism about the value proposition of the new iPhones, especially in light of strong competition from domestic rivals like Huawei[1].

Huawei’s AI Advantage

In contrast to Apple’s delayed AI rollout, Huawei’s Mate XT will offer AI-powered features from day one, including:

  • Text summary, translation, and editing functions
  • AI-enhanced image editing capabilities, such as object removal and photo retouching[6][7]

This immediate availability of AI features has contributed to the Mate XT’s strong pre-launch performance, with over four million pre-orders reported[5].

Consumer Reactions

Chinese consumers have expressed frustration with Apple’s delayed AI rollout:

  • “The absence of AI in China is akin to cutting one of Apple’s arms,” one Weibo user commented[1].
  • Another user questioned, “With the biggest selling point unavailable, shouldn’t you charge us half the price?”[1]

These sentiments reflect growing dissatisfaction among Chinese consumers who feel they are not receiving the full value of Apple’s latest innovations[1].

Market Implications

Apple’s AI delay in China could have significant consequences:

  • Market share: Apple’s ranking in China has already dropped from third to sixth place[1].
  • Competitive advantage: The delay gives competitors like Huawei an opportunity to establish themselves as leaders in AI-powered smartphones[1].
  • Regulatory challenges: Apple has yet to announce an AI partner in China, and the country’s regulatory landscape adds complexity to the situation[1].

Huawei’s Resurgence

Huawei has made a strong comeback in the high-end smartphone market:

  • The company launched the Mate 60 Pro with a domestically manufactured chip, defying US sanctions[1].
  • Huawei has become the world’s largest vendor of foldable phones, surpassing Samsung Electronics[1].
  • The new Mate XT, with its unique tri-fold design, has generated significant buzz in the market[2][4].

Looking Ahead

Apple’s AI strategy in China represents a crucial challenge for the company. While its brand still holds appeal, the delayed AI rollout and fierce competition from resurgent local players like Huawei pose serious obstacles[1]. The company’s ability to adapt its AI offerings to local conditions may ultimately determine its future success in this vital market.

Citations:
[1] https://technology.inquirer.net/136988/huawei-mate-xt-is-the-first-ever-tri-folding-smartphone
[2] https://www.cnn.com/2024/09/09/tech/china-huawei-max-xt-launch-intl-hnk/index.html
[3] https://www.nytimes.com/2024/09/10/business/huawei-trifold-iphone.html
[4] https://www.trendforce.com/news/2024/09/12/news-a-comparison-between-apple-iphone-16-and-huawei-mate-xt/
[5] https://www.reuters.com/technology/huaweis-tri-foldable-phone-stirs-chinese-pride-2800-price-tag-panned-2024-09-11/
[6] https://www.huaweicentral.com/huawei-mate-xt-has-ai-object-removal-and-expansion-tools-for-photo-editing/
[7] https://www.reuters.com/technology/huawei-teases-tri-fold-smartphone-raising-competition-with-apple-china-2024-09-10/
[8] https://readwrite.com/huawei-mate-xt-ultimate-design-triple-foldable-phone/

What California lawmakers did to regulate artificial intelligence – CalMatters
Sat, 07 Sep 2024

California legislators just sent Gov. Gavin Newsom more than a dozen bills regulating artificial intelligence, testing for threats to critical infrastructure, curbing the use of algorithms on children, limiting the use of deepfakes, and more.

But people in and around the AI industry say the proposed laws fail to stop some of the most worrisome harms of the technology, like discrimination by businesses and government entities. At the same time, the observers say, whether passed bills get vetoed or signed into law may depend heavily on industry pressure, in particular accusations that the state is regulating itself out of competitiveness in a hot field.

Debates over the bills, and decisions by the governor on whether to sign each of them, are particularly important because California is at the epicenter of AI development, with many legislators making pledges this year to regulate the technology and put the state at the forefront of protecting people from AI around the world.

Without question, Senate Bill 1047 got more attention than any other AI regulation bill this year — and after it passed both chambers of the legislature by wide margins, industry and consumer advocates are closely watching to see whether Newsom signs it into law.

Introduced by San Francisco Democratic Sen. Scott Wiener, the bill addresses huge potential threats posed by AI, requiring developers of advanced AI models to test them for their ability to enable attacks on digital and physical infrastructure and help non-experts make chemical, biological, radioactive, and nuclear weapons. It also protects whistleblowers who want to report such threats from inside tech companies.

But what if the most concerning harms from AI are commonplace rather than apocalyptic? That’s the view of people like Alex Hanna, head of research at Distributed AI Research, a nonprofit organization created by former Google ethical AI researchers based in California. Hanna said 1047 shows how California lawmakers focused too much on existential risk and not enough on preventing specific forms of discrimination. She would much rather lawmakers consider banning the use of facial recognition in criminal investigations since that application of AI has already been shown to lead to racial discrimination. She would also like to see government standards around potentially discriminatory technology adopted by contractors.

“I think 1047 got the most noise for God knows what reason but they’re certainly not leading the world or trying to match what Europe has in this legislation,” she said of California’s legislators.

Bill against AI discrimination is stripped

One bill that did address discriminatory AI was gutted and then shelved this year. Assembly Bill 2930 would have required AI developers to perform impact assessments and submit them to the Civil Rights Department, and would have made the use of discriminatory AI illegal, subject to a $25,000 fine for each violation.

The original bill sought to make use of discriminatory AI illegal in key sectors of the economy including housing, finance, insurance, and health care. But author Rebecca Bauer-Kahan, a San Ramon Democrat, yanked it after the Senate Appropriations Committee limited the bill to assessing AI in employment. That sort of discrimination is already expected to be curbed by rules that the California Civil Rights Department and California Privacy Protection Agency are drafting. Bauer-Kahan told CalMatters she plans to put forward a stronger bill next year, adding, “We have strong anti-discrimination protections but under these systems we need more information.”

Like Wiener’s bill, Bauer-Kahan’s was subject to lobbying by opponents in the tech industry, including Google, Meta, Microsoft and OpenAI, which hired its first lobbyist ever in Sacramento this spring. Unlike Wiener’s bill, it also attracted opposition from nearly 100 companies from a wide range of industries, including Blue Shield of California, dating app company Bumble, biotech company Genentech, and pharmaceutical company Pfizer.

The failure of the AI discrimination bill is one reason there are still “gaping holes” in California’s AI regulation, according to Samantha Gordon, chief program officer at TechEquity, which lobbied in favor of the bill. Gordon, who co-organized a working group on AI with privacy, labor, and human rights groups, believes the state still needs legislation to address “discrimination, disclosure, transparency, and which use cases deserve a ban because they have demonstrated an ability to harm people.”

Still, Gordon said, the passage of Wiener’s bill marked important progress, as did the passage of Senate Bill 892, which sets the standards for contracts government agencies sign for AI services. Doing so, author and Chula Vista Democratic Sen. Steve Padilla told CalMatters earlier this year, leverages the government’s buying power to encourage safer and more ethical AI services.


While some experts criticized Wiener’s bill for what it failed to do, the tech industry has gone after it for what it does. The measure’s testing requirements and associated enforcement mechanisms will kneecap fast-moving tech companies and create a chilling effect on code sharing that inhibits innovation, big tech companies like Google and Meta have said.

Given the industry’s power in California, this criticism is the proverbial elephant in the room, said Joep Meindertsma, CEO of Pause.ai. Pause.ai is a proponent of regulating AI, endorsing Wiener’s bill and even organizing protests at the offices of California-based companies including Meta and OpenAI. So Meindertsma was happy to see so many regulatory bills clear the legislature this year. But he worries they will be undermined by the tension between a desire to regulate AI and a desire to win the race — among not just companies but entire countries — to have the best AI. Regulators in California and elsewhere, he said, want to have it both ways.

“The market dynamic between countries that are trying to stay ahead of the competition, trying to avoid regulating their companies too much over fear of slowing down while the others keep racing, that dynamic is the issue that I feel is the most toxic in the entire situation,” he said.

Learn more about legislators mentioned in this story.

Bill Dodd

Democrat, State Senate, District 3 (Napa)

Scott Wiener

Democrat, State Senate, District 11 (San Francisco)

Josh Becker

Democrat, State Senate, District 13 (Menlo Park)

Steve Padilla

Democrat, State Senate, District 18 (Chula Vista)

Buffy Wicks

Democrat, State Assembly, District 14 (Oakland)

Rebecca Bauer-Kahan

Democrat, State Assembly, District 16 (San Ramon)

There are already signs that industry pressure could prevail, at least against Wiener’s bill.

Several Democratic members of California’s Congressional delegation have called on Newsom to veto the bill. Former House Speaker Nancy Pelosi, who represents San Francisco, has also come out against it.

In recent weeks, Newsom seems to have leaned into AI, raising questions over how much appetite he has to regulate it. The governor showed great interest in using AI to solve problems in the state of California, signing an agreement with AI powerhouse Nvidia last month, launching an AI for tax advice pilot program in February, and on Thursday introducing an AI solution aimed at connecting homeless people with services. When asked directly about Wiener’s bill in May, Newsom equivocated, saying that lawmakers must strike a balance between responding to calls for regulation and overdoing it.

The sleeper hits of this year’s AI legislation

Some bills that were more targeted — and significantly less publicized — than Wiener’s SB 1047 did find success in the legislature.

SB 942, introduced by Democratic Sen. Josh Becker of Menlo Park, would require companies to supply free AI detection tools to the public so people can tell the difference between AI-generated content and reality.

SB 896 by Democratic Sen. Bill Dodd of Napa would force government agencies to assess the risk of using generative AI and disclose when the technology is used.

Other AI bills passed this legislative session are designed to protect children, including one that makes it a crime to create child pornography with generative AI and another that requires the makers of social media apps to turn off algorithmic curation of content for users under age 18 unless they get permission from a parent or guardian. By default, children would instead see a chronological stream of recent posts from accounts they follow. The bill also limits notifications from social media apps during school hours and between midnight and 6 am.

A trio of bills passed last week aim to protect voters from deceptive audio, imagery, and video known as deepfakes. One bill goes after individuals who create or publish deceptive content made with AI and allows a judge to order an injunction requiring them to either take down the content or pay damages. Another bill requires large online platforms such as Facebook to remove or label deepfakes within 72 hours of a user reporting them, while yet another requires political campaigns to disclose use of AI in advertising.

Also on Newsom’s desk are bills that would require creatives to get permission before using the likeness of a dead person and prohibit use of digital replicas in some instances. Both of those bills were supported by the actors union SAG-AFTRA.

Which bills didn’t pass

In lawmaking, what fails to pass, like Bauer-Kahan’s AI anti-discrimination bill, is often just as important as what advances.

Case in point: AB 3211, which would have required AI makers to label AI-generated content. It sputtered out despite support from companies including Adobe, Microsoft, and OpenAI. In a statement shared on social media on Tuesday, bill author Democratic Assemblymember Buffy Wicks of Oakland said it’s unfortunate that the California Senate did not take up her bill, which “was model policy for the rest of the nation.” She said she plans to reintroduce it next year.

The labeling bill and Bauer-Kahan’s bill are two of three measures flagged as key by European Union officials who advised California lawmakers behind the scenes to adopt AI regulation in line with the EU’s AI Act, which took five years to create and went into effect this spring. Gerard de Graaf, director of the San Francisco EU office, came to the California Legislature to meet with the authors of AB 3211, AB 2930, and SB 1047 in pursuit of the goal of aligning regulation between Sacramento and Brussels.

In an interview with CalMatters this spring, de Graaf said those three laws would accomplish the majority of what the AI Act seeks to do. This week, de Graaf had high praise for his California counterparts, saying he thinks state lawmakers did some serious work to pass so many different AI regulation bills, that they’re at the top of their game, and that they succeeded in being a world leader in AI regulation this year.

“This requires a thorough understanding and that’s not present in many legislatures around the world and in that sense California is a leader,” he said. “The fact that California achieved as much as it did in a year is not an insignificant feat and this will presumably continue.”

Despite having advised lawmakers on two bills that failed to pass, and despite the possibility that Senate Bill 1047 faces a veto, de Graaf said he sees convergence with EU AI policy in the passage of a bill that requires AI developers to disclose information about datasets used to train AI models.

The fact that the bill meant to protect citizens from discriminatory AI didn’t pass is a really disappointing reflection of the power of tech capital in California politics, said UC Irvine School of Law professor Veena Dubal, whose research has dealt with technology and marginalized workers.

“It really feels like our legislature has been captured by tech companies who by their very structure don’t have the interest of the public at the forefront of their own advocacy or decision making, because they’re profit making machines,” she said.

She thinks the events of the past legislative session show that California will not be a leader in regulating generative AI because the power of tech companies is too unwieldy, but she does see signs of promise in the bills passed to protect kids from AI. She’s encouraged that the digital replica bills supported by SAG-AFTRA passed, a reflection of the 2023 worker strikes, and that lawmakers made clear that using generative AI to make child pornography or to curate content for kids without parental consent should be illegal. What seems more challenging is passing laws that require any degree of accountability. It shouldn’t be debatable whether people deserve protection from civil rights violations, she said, and she wants lawmakers to label other uses of AI unacceptable, like using AI to evaluate people in the workplace.

“The fact that those laws (protecting kids) passed isn’t surprising, and my hope is that their passage paves a way for stopping or banning use of AI or automated decisionmaking in other areas of our lives in which it is clearly already wreaking harm,” she said.

ChatWTO: An Analysis of Generative Artificial Intelligence and International Trade 2024 – World Economic Forum (Sat, 07 Sep 2024)

Generative artificial intelligence could contribute an estimated $4.4 trillion annually to the global economy, reshaping industries and international trade.

This white paper explores how this emerging technology is transforming global trade by enhancing productivity, streamlining supply chains, and creating new opportunities for cross-border transactions. It examines both the potential benefits and the challenges generative AI poses, such as regulatory hurdles, data privacy concerns, and intellectual property issues. By providing a comprehensive overview of AI’s impact on trade, the report offers actionable insights for policy-makers and businesses on how to best use these advancements while addressing risks.

Key themes include the growth of digital goods trade, AI’s role in improving trade efficiency, and the importance of global collaboration for effective AI governance. This analysis is essential for those looking to understand the future of international trade in an AI-driven world.

https://www3.weforum.org/docs/WEF_An_Analysis_of_Generative_Artificial_Intelligence_and_International_Trade_2024.pdf
Why Nvidia triggered a stock market freakout – Vox.com (Sat, 07 Sep 2024)

Why Nvidia triggered a stock market freakout

What does Nvidia’s massive stock sell-off tell us about the economy?

by Ellen Ioanes

Updated Sep 5, 2024, 10:00 AM CDT

Nvidia CEO Jensen Huang during the Nvidia GTC Artificial Intelligence Conference at SAP Center on March 18, 2024 in San Jose, California. Justin Sullivan/Getty Images

Ellen Ioanes covers breaking and general assignment news as the weekend reporter at Vox. She previously worked at Business Insider covering the military and global conflicts.

Nvidia, the world’s leading AI chip manufacturer, sparked a global stock market downturn Wednesday, with indexes falling in Asia, Europe, and the United States.

After Bloomberg reported on Tuesday that the US Justice Department issued Nvidia a subpoena as part of an antitrust investigation, investors sold $279 billion worth of shares — amounting to 9.5 percent of the company’s stock. On Wednesday, a spokesperson denied that the company had received the subpoena, but said Nvidia is “happy to answer any questions regulators may have about our business.”

Still, the sell-off is bad news for Nvidia, and it renews existing concerns about the strength of the AI sector and the US economy more broadly.

That one company was able to have such an impact on global stock prices is a testament to Nvidia’s size and reach. Nvidia is the third most valuable company in the world. Because of its dominance, its success — or failure — can shift the tech-heavy Nasdaq stock index, where it is listed. And because it’s so entangled with other tech companies, when it falls, so does the stock of its partners, like Taiwan Semiconductor Manufacturing Company, which pulled down markets overseas. In the US, Nvidia pushed sell-offs throughout the entire tech industry. Microsoft, Amazon, and Intel shares were down as of Wednesday afternoon, though Nvidia competitor Advanced Micro Devices saw gains.

“One of the big risks is that you have this market concentration, and all it takes is those names to be volatile, for it to feed through to the entire market,” Justin Onuekwusi, chief investment officer at investment firm St. James’s Place, told Reuters Wednesday.

While Nvidia triggered this week’s stock market slump, there are a few other factors that have investors rattled. Recent concerns about China’s sluggish economy are putting a damper on a wide array of businesses, including an oil industry already struggling with falling prices. Weak manufacturing in the US, along with some higher prices in that sector, are part of the equation as well.

Nvidia’s troubles come amid rising uncertainty about the AI sector

Investors have significant concerns about whether the US tech sector is headed in the right direction. Questions about whether Nvidia is overvalued, and about the wisdom of investing so heavily in AI technology, have dogged the tech sector for months. Analysts from JPMorgan Asset Management and BlackRock cautioned earlier this week that massive spending on AI hasn’t been justified because the technology has limited applications outside the tech sector.

Companies like Microsoft and Meta have ignored that advice, spending as much as 40 percent of their hardware budgets — tens of billions of dollars — on Nvidia products to accelerate their own AI products. But that has investors worried that tech companies are betting too much on a future that may never come. And that if these giant companies have made a wrong bet, they may drag the stock market down with them.

“[Tech companies are] all kind of saying, ‘Look, we’re not going to be on the wrong side of this. We’re going to invest,’” Daniel Newman, CEO of the Futurum Group, a global technology research and advisory firm, told Vox. “But I’m not hearing for what, or where this provides the return. And I think there’s a little bit of hesitation on [Wall Street] — people want to know where that return comes from.”
