What's new

ChatGPT and artificial intelligence

LLMs seem to have reached a certain capability plateau now, which is good, as this should stop the panic mode, and people can finally start to build tools and agents.

The majority view is that LLMs will not achieve AGI, and in hindsight that does seem likely to be their limitation, but current LLM capabilities will still help enhance every industry. The problem is that economies are suffering across the world, and there is a good chance Big Tech will cripple the startups, so it will end up in the hands of a few (hopefully not).
 

Google Bets $40 Billion on Texas Buildout of U.S. AI Infrastructure​

Google announced its largest-ever investment in any U.S. state on Friday, committing $40 billion to Texas through 2027 to add three new data center campuses and make the Lone Star State a centerpiece of its global AI data center footprint.

The announcement came at Google’s data center facility in Midlothian, where Alphabet and Google CEO Sundar Pichai joined Gov. Greg Abbott, U.S. Deputy Secretary of Energy James Danly, U.S. Rep. Jake Ellzey, and Amanda Peterson Corio, Google’s global head of data center energy.

“We are making a new $40 billion investment in the Lone Star State,” Pichai said at the event. “This includes plans for three new data center campuses, one in Armstrong County and two in Haskell County.”

The investment bundles cloud and artificial intelligence infrastructure, energy capacity initiatives, continued buildout in North Texas, and workforce training programs.

“This is a Texas-sized investment in the future of our great state,” Abbott said. “Texas is the epicenter of AI development, where companies can pair innovation with expanding energy.” The investment, he said, is “Google’s largest investment in any state in the country and supports energy efficiency and workforce development in our state.”

Google makes Texas its AI infrastructure anchor
“Texas will be the centerpiece for AI data centers for Google,” said Abbott.

Pichai explained why Texas was selected for the massive investment. “Data centers of that scale require a few things: good, pro-innovation regulatory environments, land, and especially energy,” he said in Midlothian. “Happily, we found all three in Texas.”

Thanking Abbott for “leading the way,” Pichai added, “They say everything is bigger in Texas, and I think that applies to the golden opportunity with AI, the optimism, the talent, the policy environment, and the innovation needed to lead this new era and create immense benefits for everyone.”

The multi-billion-dollar investment will “create thousands of jobs, provide skills training to college students and electrical apprentices, and accelerate energy affordability initiatives throughout Texas,” he said.

The three new West Texas data centers expand Google’s data center footprint, which already includes Ellis County campuses in Midlothian and Red Oak. Google has maintained a Texas presence for more than 15 years, Pichai said, with thousands of employees across the state and offices in Dallas, Austin, and Houston.

Net-positive power additions to the grid and Google’s “first industrial park”
Energy strategy emerged as a central theme of the announcement, with Google committing to be a net-positive contributor to the Texas grid as it builds out new AI infrastructure.

“When we invest in data centers, part of our core strategy is to invest significantly in new energy capacity, which increases supply and ensures grid abundance for everyone,” Pichai said from the podium. “In Texas, we work with local utility partners to add more than 6,200 megawatts of net new energy generation and capacity to the grid to keep costs low.”

According to Google’s news release, the company has power purchase agreements with energy developers such as AES Corporation, Enel North America, Clearway, ENGIE, Intersect, SB Energy, Ørsted, and X-Elio.

Abbott noted Google has provided a net new addition of five gigawatts of power to the Texas grid so far — and committed to be net positive to the power grid as the build continues, “making sure we have the reliability and the supply that Texas needs to keep the lights on for all of our fellow Texans.”

In terms of energy scale, 5 gigawatts is equivalent to 5,000 megawatts.

One of the new Haskell County data centers will be built directly alongside a new solar and battery energy storage plant. Pichai said Google’s first industrial park, developed through a partnership with Intersect Power, will be co-located at the Haskell data center. “Co-locating energy supply with data center load has some powerful benefits. It can unlock the development of new transmission infrastructure and optimize utilization of the existing grid,” he said.

Google announced its industrial park partnership with Intersect and TPG Rise Climate last year, described as a strategic collaboration to develop “powered land” to allow the data center to run mostly on carbon-free electricity produced nearby.

U.S. Deputy Secretary of Energy on AI’s power demands
James Danly, the U.S. Deputy Secretary of Energy and the second-highest official at the Department of Energy, used the Midlothian announcement to zoom out to the national stakes of AI’s rising power needs.

“We’re going to need 100 gigawatts over the next number of years in order to satisfy the demand as we projected,” he said, noting the investment comes “at the same time as the President’s agenda focuses on ensuring safe, reliable, affordable, secure power for everybody, in abundance.”

In the company’s news release, Danly framed Google’s move as directly tied to that agenda. “This historic investment from Google advances this administration’s goal of winning the AI race,” he said in a statement. “Google is working to sustain and enhance America’s global AI dominance, economic competitiveness and national security while ensuring that energy remains reliable, affordable and secure.”

Danly also pointed to Texas policy as a key enabler of projects on this scale. “This is a state that has ensured energy abundance for everybody. It has made it possible to deploy new resources rapidly,” he said. “This is a model for how such projects should be done going forward, and the United States is going to be put in a very good position as a result of such projects.”

Pichai also tied the announcement to federal policy and thanked the administration. “We won’t be here without the leadership of President Trump and the White House AI action plan,” he said.

 
Google boss says trillion-dollar AI investment boom has 'elements of irrationality'​

Every company would be affected if the AI bubble were to burst, the head of Google's parent firm Alphabet has told the BBC.

Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an "extraordinary moment", there was some "irrationality" in the current AI boom.

It comes amid fears in Silicon Valley and beyond of a bubble as the value of AI tech companies has soared in recent months and companies spend big on the burgeoning industry.

Asked whether Google would be immune to the impact of the AI bubble bursting, Mr Pichai said the tech giant could weather that potential storm, but also issued a warning.

"I think no company is going to be immune, including us," he said.

In a wide-ranging exclusive interview at Google's California headquarters, he also addressed energy needs, slowing down climate targets, UK investment, the accuracy of his AI models, and the effect of the AI revolution on jobs.

The interview comes as scrutiny on the state of the AI market has never been more intense.

Alphabet shares have doubled in value in seven months to $3.5tn (£2.7tn) as markets have grown more confident in the search giant's ability to fend off the threat from ChatGPT owner OpenAI.

A particular focus is Alphabet's development of specialised superchips for AI that compete with Nvidia, the Jensen Huang-run chipmaker which recently reached a world-first $5tn valuation.

As valuations rise, some analysts have expressed scepticism about a complicated web of $1.4tn of deals being done around OpenAI, which is expected to have revenues this year of less than one hundredth of the planned investment.

It has raised fears stock markets are heading for a repeat of the dotcom boom and bust of the late 1990s. This saw the values of early internet companies surge amid a wave of optimism for what was then a new technology, before the bubble burst in early 2000 and many share prices collapsed.

This led to some companies going bust, resulting in job losses. A drop in share prices can also hit the value of people's savings including their pension funds.

In comments echoing those made by US Federal Reserve chairman Alan Greenspan in 1996, warning of "irrational exuberance" in the market well ahead of the dotcom crash, Mr Pichai said the industry can "overshoot" in investment cycles like this.

"We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound," he said.

"I expect AI to be the same. So I think it's both rational and there are elements of irrationality through a moment like this."

His comments follow a warning from Jamie Dimon, the boss of US bank JP Morgan, who told the BBC last month that investment in AI would pay off, but some of the money poured into the industry would "probably be lost".

But Mr Pichai said Google's unique model of owning its own "full stack" of technologies - from chips to YouTube data, to models and frontier science - meant it was in a better position to ride out any AI market turbulence.

The tech giant is also expanding its footprint in the UK. In September, Alphabet announced it was investing in UK artificial intelligence, committing £5bn to infrastructure and research over the next two years.

Mr Pichai said Alphabet will develop "state of the art" research work in the UK including at its key AI unit DeepMind, based in London.

For the first time, he said Google would "over time" take a step that the government has been pushing for and "train our models" in the UK - a move that cabinet ministers believe would cement the UK as the number three AI "superpower" after the US and China.

"We are committed to investing in the UK in a pretty significant way," Mr Pichai said.

However, he also warned about the "immense" energy needs of AI, which made up 1.5% of the world's electricity consumption last year, according to the International Energy Agency.

Mr Pichai said action was needed, including in the UK, to develop new sources of energy and scale up energy infrastructure.

"You don't want to constrain an economy based on energy, and I think that will have consequences," he said.

He also acknowledged that the intensive energy needs of its expanding AI venture meant there was slippage on the company's climate targets, but insisted Alphabet still had a target of achieving net zero by 2030 by investing in new energy technologies.

"The rate at which we were hoping to make progress will be impacted," he said.

AI will also affect work as we know it, Mr Pichai said, calling it "the most profound technology" humankind had worked on.

"We will have to work through societal disruptions," he said, adding that it would also "create new opportunities".

"It will evolve and transition certain jobs, and people will need to adapt," he said. Those who do adapt to AI "will do better".

"It doesn't matter whether you want to be a teacher [or] a doctor. All those professions will be around, but the people who will do well in each of those professions are people who learn how to use these tools."

Source: BBC
 

Yann LeCun to leave Meta, launch AI startup focused on Advanced Machine Intelligence​


Nov 19 (Reuters) - Yann LeCun, one of the founding figures of modern artificial intelligence and a pivotal force at Meta Platforms (META.O), said on Wednesday he plans to leave the company at the end of the year to launch a new AI startup.

LeCun has been a key part of Meta's artificial intelligence ambitions for more than a decade. He joined the company in 2013 to create Facebook AI Research (FAIR), the in-house lab that helped transform Meta into one of the AI leaders.

Over 12 years, he served five as FAIR's founding director and seven as the company's chief AI scientist, guiding breakthroughs in deep learning, computer vision and large-scale language modeling that underpin products like Instagram recommendations and Meta's generative AI systems.

He developed an early form of an artificial neural network that mimicked how the human eye and brain process images — technology that later became the backbone of modern image recognition and GenAI.

LeCun, 65, said his new venture will pursue Advanced Machine Intelligence (AMI) research — a project he has developed in collaboration with colleagues at FAIR and New York University, where he teaches.

The computer scientist said he will provide more details on his new firm at a later date, but added that Meta will be a partner in the venture, reflecting what he called the company's "continued interest and support" for AMI's long-term goals.

"The creation of FAIR is my proudest non-technical accomplishment," he wrote. "The impact of FAIR on Meta, the AI field, and the wider world has been spectacular."

Meta CEO Mark Zuckerberg and CTO Andrew Bosworth have both credited LeCun with laying the foundations for Meta's current AI infrastructure, including its open-source Llama models that have become a cornerstone of the global AI research community.

LeCun is widely regarded as one of the "godfathers" of deep learning, alongside Geoffrey Hinton and Yoshua Bengio — a trio that won the 2018 Turing Award, often called the Nobel Prize of computing.

Source: REUTERS
 

Don’t like Yann LeCun. Meta is chasing AGI and ASI, and Mr. LeCun says in his interviews that AGI will never be achieved. He clearly does not believe in Meta's goals, so he can start his own AI startup. Some have already done that after leaving OpenAI; I am talking about Ilya Sutskever.
 
7 useful prompts to learn anything from ChatGPT

1. Specify output format
When assigning a question to ChatGPT, you can specify how the reply is formatted (a programmatic version of this appears in the sketch after this list).
For instance: "What are the longest highways in the United States? List the top four as bullet points."

2. Explain like I'm a beginner.
Prompt: Explain [topic] in simple terms. Explain to me as if I'm a beginner.

3. Fictional World Creation
Prompt: "Create a detailed fictional world where magic is commonplace, technology is advanced, and societies are governed by sentient AI. Describe the major factions, conflicts, and cultural nuances within this world."

4. AI-Generated Art Critique
Prompt: Analyze and critique a piece of artwork generated by an advanced AI algorithm. Discuss its originality, emotional impact, and artistic merit, considering the role of AI in shaping contemporary art.

5. Quiz yourself
Once you've learned a topic, use ChatGPT to quiz yourself and become more confident in it.
Prompt: Give me a short quiz that teaches me [what you want to learn]

6. Projects
Get awesome project ideas from ChatGPT
Prompt:
I am a beginner interested in .... To do this I need to know how to ..... Can you give me some beginner project ideas I could work on to strengthen my skills....

7. Political Speech Writing
Prompt: "Draft a compelling political speech advocating for comprehensive healthcare reform, addressing key issues such as access, affordability, and quality of care. Appeal to both emotions and logic to rally public support."

Source: Altiam Kabir (Facebook).
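
Prompt #1 also works programmatically. Below is a minimal sketch using the openai Python SDK, offered as a loose illustration rather than anything from the original post: it assumes the `openai` package is installed, an OPENAI_API_KEY environment variable is set, and the model name is just an illustrative choice.

```python
# Minimal sketch: asking for a specific output format over the API.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment;
# the model name below is an illustrative choice, not a recommendation.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice of model
    messages=[{
        "role": "user",
        "content": (
            "What are the longest highways in the United States? "
            "List the top four as bullet points."
        ),
    }],
)
print(response.choices[0].message.content)  # prints the bulleted list
```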
 
Meta buried 'causal' evidence of social media harm, US court filings allege

Meta shut down internal research into the mental health effects of Facebook after finding causal evidence that its products harmed users’ mental health, according to unredacted filings in a lawsuit by U.S. school districts against Meta and other social media platforms.

In a 2020 research project code-named “Project Mercury,” Meta (META.O) scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

Privately, however, a staffer insisted that the conclusions of the research were valid, according to the filing.

“The Nielsen study does show causal impact on social comparison,” (unhappy face emoji), an unnamed staff researcher allegedly wrote. Another staffer worried that keeping quiet about negative findings would be akin to the tobacco industry “doing research and knowing cigs were bad and then keeping that info to themselves.”

Despite Meta’s own work documenting a causal link between its products and negative mental health effects, the filing alleges, Meta told Congress that it had no ability to quantify whether its products were harmful to teenage girls.

In a statement Saturday, Meta spokesman Andy Stone said the study was stopped because its methodology was flawed, and that the company has worked diligently to improve the safety of its products.

“The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens,” he said.

PLAINTIFFS ALLEGE PRODUCT RISKS WERE HIDDEN

The allegation of Meta burying evidence of social media harms is just one of many in a late Friday filing by Motley Rice, a law firm suing Meta, Google (GOOGL.O), TikTok and Snapchat (SNAP.N) on behalf of school districts around the country. Broadly, the plaintiffs argue the companies have intentionally hidden the internally recognized risks of their products from users, parents and teachers.

TikTok, Google and Snapchat did not immediately respond to a request for comment.

Allegations against Meta and its rivals include tacitly encouraging children below the age of 13 to use their platforms, failing to address child sexual abuse content and seeking to expand the use of social media products by teenagers while they were at school. The plaintiffs also allege that the platforms attempted to pay child-focused organizations to defend the safety of their products in public.

In one instance, TikTok sponsored the National PTA and then internally boasted about its ability to influence the child-focused organization. Per the filing, TikTok officials said the PTA would “do whatever we want going forward in the fall… (t)hey’ll announce things publicly(,) (t)heir CEO will do press statements for us.”

By and large, however, the allegations against the other social media platforms are less detailed than those against Meta. The internal documents cited by the plaintiffs allege:

  1. Meta intentionally designed its youth safety features to be ineffective and rarely used, and blocked testing of safety features that it feared might be harmful to growth.
  2. Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform, which a document described as “a very, very, very high strike threshold."
  3. Meta recognized that optimizing its products to increase teen engagement resulted in serving them more harmful content, but did so anyway.
  4. Meta stalled internal efforts to prevent child predators from contacting minors for years due to growth concerns, and pressured safety staff to circulate arguments justifying its decision not to act.
  5. In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.” Zuckerberg also shot down or ignored requests by Nick Clegg, Meta's then-head of global public policy, to better fund child safety work.
Meta’s Stone disputed these allegations, saying the company’s teen safety measures are effective and that the company’s current policy is to remove accounts as soon as they are flagged for sex trafficking.

He said the suit misrepresents its efforts to build safety features for teens and parents, and called its safety work “broadly effective.”

"We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions,” Stone said.

The underlying Meta documents cited in the filing are not public, and Meta has filed a motion to strike the documents. Stone said the objection was to the over-broad nature of what the plaintiffs are seeking to unseal, not to unsealing in its entirety.

A hearing regarding the filing is set for January 26 in the U.S. District Court for the Northern District of California.

 

Developers can now submit apps to ChatGPT​

We’re opening app submissions for review and publication in ChatGPT, and users can discover apps in the app directory.

Earlier this year at DevDay, we introduced apps in ChatGPT. Starting today, developers can submit apps for review and publication in ChatGPT by following our app submission guidelines. Apps extend ChatGPT conversations by bringing in new context and letting users take actions like ordering groceries, turning an outline into a slide deck, or searching for an apartment. We’ve published resources to help developers build high-quality apps that users will love—based on what we’ve learned since DevDay—like best practices on what makes a great ChatGPT app, open-source example apps, an open-sourced UI library for chat-native interfaces, and a step-by-step quickstart guide.

We’re also introducing an app directory right inside ChatGPT, where users can browse featured apps or search for any published app. The app directory is discoverable from the tools menu or directly from chatgpt.com/apps. Developers can also use deep links on other platforms to send users right to their app page in the directory.

Once users connect to apps, apps can get triggered during conversations when @-mentioned by name, or when selected from the tools menu. We’re also experimenting with ways to surface relevant, helpful apps directly within conversations—using signals like conversational context, app usage patterns, and user preferences—and giving users clear ways to provide feedback.

Building, submitting and monetizing apps​

Building a great ChatGPT app starts with designing for real user intent. Developers can use the Apps SDK—now in beta—to build chat-native experiences that bring context and action directly into ChatGPT. The strongest apps are tightly scoped, intuitive in chat, and deliver clear value by either completing real-world workflows that start in conversation or enabling new, fully AI-native experiences inside ChatGPT. We recommend reviewing the app submission guidelines early to help you build a high-quality app. Additional documentation and examples are available in the developer resource hub.
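
Under the hood, an app's backend is an MCP server (submissions include MCP connectivity details, as noted below). As a loose illustration, here is a minimal sketch of a one-tool MCP server using the open-source `mcp` Python SDK; to be clear, this is not the Apps SDK itself, and the apartment-search tool is a hypothetical stand-in for a real backend.

```python
# Minimal sketch of an MCP server exposing one tool, using the open-source
# `mcp` Python SDK (pip install mcp). Not the Apps SDK itself; the
# apartment-search tool is a hypothetical stand-in for a real backend.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("apartment-finder")  # server name presented to clients

@mcp.tool()
def search_apartments(city: str, max_rent: int) -> list[str]:
    """Return example apartment listings under max_rent in the given city."""
    # Hypothetical stand-in for a real listings lookup.
    return [f"2BR in {city} for ${max_rent - 100}/mo"]

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```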

Once ready, developers can submit apps for review and track approval status in the OpenAI Developer Platform. Submissions include MCP connectivity details, testing guidelines, directory metadata, and country availability settings. The first set of approved apps will begin rolling out gradually in the new year. Apps that meet our quality and safety standards are eligible to be published in the app directory, and apps that resonate with users may be featured more prominently in the directory or recommended by ChatGPT in the future.

In this early phase, developers can link out from their ChatGPT apps to their own websites or native apps to complete transactions for physical goods. We’re exploring additional monetization options over time, including digital goods, and will share more as we learn from how developers and users build and engage.

Safety and privacy​

All developers are required to follow the app submission guidelines around safety, privacy, and transparency. Apps must comply with OpenAI’s usage policies, be appropriate for all audiences, and adhere to third-party terms of service when accessing their content. Developers must include clear privacy policies with every app submission and we require developers to only request the information needed to make their apps work.

When a user connects to a new app, we will disclose what types of data may be shared with the third party and provide the app’s privacy policy for review. And users are always in control: disconnect an app at any time, and it immediately loses access.

Looking ahead​

This is just the beginning. Over time, we want apps in ChatGPT to feel like a natural extension of the conversation, helping people move from ideas to action, while building a thriving ecosystem for developers. As we learn from developers and users, we’ll continue refining the experience for everyone. We also plan to grow the ecosystem of apps in ChatGPT, make apps easier to discover, and expand the ways developers can reach users and monetize their work.

Source: https://openai.com/index/developers-can-now-submit-apps-to-chatgpt/
 
UK to bring into force law to tackle Grok AI deepfakes this week

The UK will bring into force a law which will make it illegal to create non-consensual intimate images, following widespread concerns over Elon Musk's Grok AI chatbot.

The Technology Secretary Liz Kendall said the government would also seek to make it illegal for companies to supply the tools designed to create such images.

Speaking to the Commons, Kendall said AI-generated pictures of women and children in states of undress, created without a person's consent, were not "harmless images" but "weapons of abuse".

The BBC has approached X for comment. It previously said: "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."

It comes hours after Ofcom announced it was launching an investigation into X over "deeply concerning reports" about Grok altering images of people.

In a statement, Kendall urged the regulator not to take "months and months" to conclude its investigation, and demanded it set out a timeline "as soon as possible".

It is currently illegal to share deepfakes of adults in the UK, but legislation which would make it a criminal offence to create or request them had not, until now, been brought into force, despite passing in June 2025.

Kendall said she would also make it a "priority offence" in the Online Safety Act.

"The content which has circulated on X is vile. It's not just an affront to decent society, it is illegal," she said.

"Let me be crystal clear - under the Online Safety Act, sharing intimate images of people without their consent, or threatening to share them, including pictures of people in their underwear, is a criminal offence for individuals and for platforms.

"This means individuals are committing a criminal offence if they create or seek to create such content including on X, and anyone who does this should expect to face the full extent of the law."


 
As expected, Anthropic is improving exponentially with its models. Haiku 4.5 has been extremely useful to me, but I have to say that unless I can use it to create all my flows or agents, I probably won't be able to move to the next level.

It's an unfortunate time for many in the coding and analyst fields though; I don't see how SaaS companies would employ so many folks with that much automation available.
 
I don't know why the Chinese do this; they are capable of being the best, but they resort to this petty nonsense.

—————

Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model


Anthropic on Monday said it identified "industrial-scale campaigns" mounted by three artificial intelligence (AI) companies, DeepSeek, Moonshot AI, and MiniMax, to illegally extract Claude's capabilities to improve their own models.


 
A huge day in the American tech world and a massive shake-up.

THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.

The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.

Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.

WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!

PRESIDENT DONALD J. TRUMP

https://truthsocial.com/@realDonaldTrump/116144552969293195

 
OpenAI isn't able to keep up with Anthropic on the productivity levels required in the real world, but this is definitely a short-term gain for OpenAI, and regulation hell is incoming for Anthropic.
 
If Anthropic survives this and the Chinese attacks, they will make it big.

Mira should have defected to Anthropic; her ego and inability to realize what's at stake will destroy countless $$.
 
Interesting article below. Anyone with even some basic tech knowledge can see how big a deal Cursor is in the world of IDEs these days, so Cursor switching to a Chinese model is a major shift in the powers of the tech landscape.

What I find interesting from the article below is that China, a supposedly closed government, is producing the most open, free, and powerful models, while the US/EU, the supposedly open democracies, are only churning out closed-source paid models. I feel there is a slow and steady shift of power on the AI software side from Western nations to China. Mind you, this is not happening in hardware yet, because the US has control of TSMC chips.


News source -- https://venturebeat.com/technology/...-built-on-a-chinese-ai-model-and-it-exposes-a

---------------------------------------------------------------------------------------------------------------------------

The $29.3 billion AI coding tool just got caught with its provenance showing. When Cursor launched Composer 2 last week — calling it "frontier-level coding intelligence" — it presented the model as evidence that the company is a serious AI research lab, not just a forked integrated development environment (IDE) wrapping someone else's foundation model. What the announcement omitted was that Composer 2 was built on top of Kimi K2.5, an open-source model from Moonshot AI, a Chinese startup backed by Alibaba, Tencent and HongShan (the firm formerly known as Sequoia China).

A developer named Fynn (@fynnso) on X figured it out within hours. By setting up a local debug proxy server and routing Cursor's API traffic through it, Fynn intercepted the outbound request and found the model ID in plain sight: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast.

"So composer 2 is just Kimi K2.5 with RL," Fynn wrote. "At least rename the model ID." The post racked up 2.6 million views.

In a follow-up, Fynn noted that Cursor's previous model, Composer 1.5, blocked this kind of request interception — but Composer 2 did not, calling it "probably an oversight." Cursor quickly patched it, but the secret was already out.
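
The article doesn't say which tooling Fynn used, but the technique is easy to reproduce. Here is a minimal sketch of the same idea as a mitmproxy addon script, purely illustrative: route the application's HTTPS traffic through the local proxy and log the model field of any outbound JSON request.

```python
# Sketch of the interception approach described above, as a mitmproxy addon.
# Purely illustrative; the article does not say which proxy Fynn used.
# Run with: mitmdump -s log_model_ids.py (and point the app's traffic at it).
import json
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    """Log the "model" field of any outbound JSON request body."""
    if not flow.request.content:
        return
    try:
        body = json.loads(flow.request.get_text())
    except (ValueError, UnicodeDecodeError):
        return  # not a JSON body; ignore
    if isinstance(body, dict) and "model" in body:
        print(f"{flow.request.host} -> model id: {body['model']}")
```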

Cursor's VP of Developer Education, Lee Robinson, confirmed the Kimi connection within hours, and co-founder Aman Sanger acknowledged it was a mistake not to disclose the base model from the start.


But the story that matters here is not about one company's disclosure failure. It is about why Cursor — and likely many other AI product companies — turned to a Chinese open model in the first place.

The open-model vacuum: Why Western companies keep reaching for Chinese foundations

Cursor's decision to build on Kimi K2.5 was not random. The model is a 1 trillion parameter mixture-of-experts architecture with 32 billion active parameters, a 256,000-token context window, native image and video support, and an Agent Swarm capability that runs up to 100 parallel sub-agents simultaneously.

Released under a modified MIT license that permits commercial use, Kimi K2.5 is competitive with the best models in the world on agentic benchmarks and scored first among all models on MathVista at release.

When an AI product company needs a strong open model for continued pretraining and reinforcement learning — the kind of deep customization that turns a foundation into a differentiated product — the options from Western labs have been surprisingly thin.

Meta's Llama 4 Scout and Maverick shipped in April 2025, but they were severely lacking, and the much-anticipated Llama 4 Behemoth has been indefinitely delayed. As of March 2026, Behemoth still has no public release date, with reports suggesting Meta's internal teams are not convinced the 2-trillion-parameter model delivers enough of a performance leap to justify shipping it.

Google's Gemma 3 family topped out at 27 billion parameters — excellent for edge and single-accelerator deployment, but not a frontier-class foundation for building production coding agents. Gemma 4 has yet to be announced, though it has sparked speculation that a release may be imminent.

And then there's OpenAI, which released arguably the most conspicuous American open source contender, the gpt-oss family (in 20-billion and 120-billion parameter variants) in August 2025. Why wouldn't Cursor build atop this model if it needed a base model to fine-tune?

The answer lies in the "intelligence density" required for frontier-class coding. While gpt-oss-120b is a monumental achievement for Western open source—offering reasoning capabilities that rival proprietary models like o4-mini—it is fundamentally a sparse Mixture-of-Experts (MoE) model that activates only 5.1 billion parameters per token. For a general-purpose reasoning assistant, that is an efficiency masterstroke; for a tool like Composer 2, which must maintain structural coherence across a 256,000-token context window, it is arguably too "thin." By contrast, Kimi K2.5 is a 1-trillion-parameter titan that keeps 32 billion parameters active at any given moment. In the high-stakes world of agentic coding, sheer cognitive mass still dictates performance, and Cursor clearly calculated that Kimi’s 6x advantage in active parameter count was essential for synthesizing the "context explosion" that occurs during complex, multi-step autonomous programming tasks.
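
These figures are simple to sanity-check. A trivial back-of-the-envelope script, using only the parameter counts quoted above, confirms the roughly 6x ratio:

```python
# Back-of-the-envelope check on the active-parameter figures cited above.
kimi_total, kimi_active = 1e12, 32e9          # Kimi K2.5: 1T total, 32B active
gpt_oss_total, gpt_oss_active = 120e9, 5.1e9  # gpt-oss-120b: 120B total, 5.1B active

print(f"active-parameter ratio: {kimi_active / gpt_oss_active:.1f}x")   # -> 6.3x
print(f"Kimi sparsity: {kimi_active / kimi_total:.1%} active")          # -> 3.2%
print(f"gpt-oss sparsity: {gpt_oss_active / gpt_oss_total:.1%} active") # -> about 4.2%
```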

Beyond raw scale, there is the matter of structural resilience. OpenAI’s open-weight models have gained a quiet reputation among elite developer circles for being "post-training brittle"—models that are brilliant out of the box but prone to catastrophic forgetting when subjected to the kind of aggressive, high-compute reinforcement learning Cursor required.

Cursor didn't just apply a light fine-tune; they executed a "4x scale-up" in training compute to bake in their proprietary self-summarization logic. Kimi K2.5, built specifically for agentic stability and long-horizon tasks, provided a more durable "chassis" for these deep architectural renovations. It allowed Cursor to build a specialized agent that could solve competition-level problems, like compiling the original Doom for a MIPS architecture, without the model's core logic collapsing under the weight of its own specialized training.

That leaves a gap. And Chinese labs — Moonshot, DeepSeek, Qwen, and others — have filled it aggressively. DeepSeek's V3 and R1 models caused a panic in Silicon Valley in early 2025 by matching frontier performance at a fraction of the cost. Alibaba's Qwen3.5 family has shipped models at nearly every parameter count from 600 million to 397 billion active parameters. Kimi K2.5 sits squarely in the sweet spot for companies that want a powerful, open, customizable base.

Cursor is not the only product company in this position. Any enterprise building specialized AI applications on top of open models today confronts the same calculus: the most capable, most permissively licensed open foundations disproportionately come from Chinese labs.

What Cursor actually built — and why the base model matters less than you think

To its credit, Cursor did not just slap a UI on Kimi. Lee Robinson stated that roughly a quarter of the total compute used to build Composer 2 came from the Kimi base, with the remaining three quarters from Cursor's own continued training. The company's technical blog post describes a technique called self-summarization that addresses one of the hardest problems in agentic coding: context overflow during long-running tasks.

When an AI coding agent works on complex, multi-step problems, it generates far more context than any model can hold in memory at once. The typical workaround — truncating old context or using a separate model to summarize it — causes the agent to lose critical information and make cascading errors. Cursor's approach trains the model itself to compress its own working memory in the middle of a task, as part of the reinforcement learning process. When Composer 2 nears its context limit, it pauses, compresses everything down to roughly 1,000 tokens, and continues. Those summaries are rewarded or penalized based on whether they helped complete the overall task, so the model learns what to retain and what to discard over thousands of training runs.
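
Schematically, the loop Cursor describes might look like the sketch below. The roughly 1,000-token summary budget comes from the blog post's description; everything else (the context limit and the tokenizer and model-call helpers) is a hypothetical stand-in, since Cursor has not published the actual code.

```python
# Schematic sketch of self-summarization during a long agentic task.
# The ~1,000-token summary budget is from Cursor's description; the
# helpers (count_tokens, model_step, model_summarize) are hypothetical.
CONTEXT_LIMIT = 128_000   # assumed context window, in tokens
SUMMARY_BUDGET = 1_000    # target size of the compressed working memory

def run_agent(task, count_tokens, model_step, model_summarize):
    context = [task]
    while True:
        if count_tokens(context) > CONTEXT_LIMIT - SUMMARY_BUDGET:
            # The model compresses its own history mid-task. During RL
            # training, the summary is rewarded only if the overall task
            # still succeeds, so it learns what to keep and what to drop.
            summary = model_summarize(context, max_tokens=SUMMARY_BUDGET)
            context = [task, summary]
        action = model_step(context)  # next tool call, edit, or final answer
        if action.done:
            return action.result
        context.append(action)
```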

The results are meaningful. Cursor reports that self-summarization cuts compaction errors by 50 percent compared to heavily engineered prompt-based baselines, using one-fifth the tokens. As a demonstration, Composer 2 solved a Terminal-Bench problem — compiling the original Doom game for a MIPS processor architecture — in 170 turns, self-summarizing over 100,000 tokens repeatedly across the task. Several frontier models cannot complete it. On CursorBench, Composer 2 scores 61.3 compared to 44.2 for Composer 1.5, and reaches 61.7 on Terminal-Bench 2.0 and 73.7 on SWE-bench Multilingual.

Moonshot AI itself responded supportively after the story broke, posting on X that it was proud to see Kimi provide the foundation and confirming that Cursor accessed the model through an authorized commercial partnership with Fireworks AI, a model hosting company. Nothing was stolen. The use was commercially licensed.

Beyond attribution: The silence raises licensing and governance questions

Cursor co-founder Aman Sanger acknowledged the omission, saying it was a miss not to mention the Kimi base in the original blog post. The reasons for that silence are not hard to infer. Cursor is valued at nearly $30 billion on the premise that it is an AI research company, not an integration layer. And Kimi K2.5 was built by a Chinese company backed by Alibaba — a sensitive provenance at a moment when the US-China AI relationship is strained and government and enterprise customers increasingly care about supply chain origins.

The real lesson is broader. The whole industry builds on other people's foundations. OpenAI's models are trained on decades of academic research and internet-scale data. Meta's Llama is trained on data it does not always fully disclose. Every model sits atop layers of prior work. The question is what companies say about it — and right now, the incentive structure rewards obscuring the connection, especially when the foundation comes from China.

For IT decision-makers evaluating AI coding tools and agent platforms, this episode surfaces practical questions: do you know what's under the hood of your AI vendor's product? Does it matter for your compliance, security, and supply chain requirements? And is your vendor meeting the license obligations of its own foundation model?

The Western open-model gap is starting to close — but slowly

The good news for enterprises concerned about model provenance is that it does seem Western open models are about to get significantly more competitive. NVIDIA has been on an aggressive release cadence. Nemotron 3 Super, released on March 11, is a 120-billion-parameter hybrid Mamba-Transformer model with 12 billion active parameters, a 1-million-token context window, and up to 5x higher throughput than its predecessor. It uses a novel latent mixture-of-experts architecture and was pre-trained in NVIDIA's NVFP4 format on the Blackwell architecture. Companies including Perplexity, CodeRabbit, Factory, and Greptile are already integrating it into their AI agents.

Days later, NVIDIA followed with Nemotron-Cascade 2, a 30-billion-parameter MoE model with just 3 billion active parameters that outperforms both Qwen 3.5-35B and the larger Nemotron 3 Super across mathematics, code reasoning, alignment, and instruction-following benchmarks. Cascade 2 achieved gold-medal-level performance on the 2025 International Mathematical Olympiad, the International Olympiad in Informatics, and the ICPC World Finals — making it only the second open-weight model after DeepSeek-V3.2-Speciale to accomplish that. Both models ship with fully open weights, training datasets, and reinforcement learning recipes under permissive licenses — exactly the kind of transparency that Cursor's Kimi episode highlighted as missing.

What IT leaders should watch: The provenance question is not going away

The Cursor-Kimi episode is a preview of a recurring pattern. As AI product companies increasingly build differentiated applications through continued pretraining, reinforcement learning, and novel techniques like self-summarization on top of open foundation models, the question of which foundation sits at the bottom of the stack becomes a matter of enterprise governance — not just technical preference.

NVIDIA's Nemotron family and the anticipated Gemma 4 represent the strongest near-term candidates for closing the Western open-model gap. Nemotron 3 Super's hybrid architecture and million-token context window make it directly relevant for the same agentic coding use cases that Cursor addressed with Kimi. Cascade 2's extraordinary intelligence density — gold-medal competition performance at just 3 billion active parameters — suggests that smaller, highly optimized models trained with advanced RL techniques can increasingly substitute for the massive Chinese foundations that have dominated the open-model landscape.

But for now, the line between American AI products and Chinese model foundations is not as clean as the geopolitical narrative suggests. One of the most-used coding tools in the world runs on a model backed by Alibaba — and may not originally have been meeting the attribution requirements of the license that enabled it. Cursor says it will disclose the base model next time. The more interesting question is whether, next time, it will have a credible Western alternative to disclose.
 
So, is there any Indian AI software that we should be aware of?

A lot of Indians did say that they would challenge the US and China in this, but I haven't yet heard of a rival. Also, every Indian I know uses ChatGPT; on one hand they say don't use Chinese tech/software, but then they always use it.
 
Meta says it will cut 8,000 jobs as AI spending soars

Meta will cut thousands of jobs next month as it spends more than ever on artificial intelligence (AI) projects.

The company told employees in a memo on Thursday that it planned to cut 10% of its workforce - roughly 8,000 staff. It said it would also not fill thousands more open jobs it had been hiring for.

A key reason for the layoffs is Meta's increased spending in other areas of the company, including AI, for which it will this year spend $135bn (£100bn). This is roughly equal to the amount it has spent on AI in the previous three years combined, according to a person who viewed the memo.

A spokesman for Meta confirmed the planned job cuts but declined to comment further.

Mark Zuckerberg, Meta's co-founder and chief executive, made public comments in January that essentially telegraphed the company would be cutting jobs again this year.

The Meta boss said he had seen how much more productive workers who relied heavily on AI tools had become, noting a single person could now complete projects that would have previously required a large team.

"I think that 2026 is going to be the year that AI starts to dramatically change the way that we work," Zuckerberg said.

Last week Reuters news agency reported that Meta was planning to cut potentially more than 10,000 employees this year. The memo to employees on Thursday was first reported by Bloomberg.

While Meta has already cut around 2,000 workers in two smaller rounds of layoffs already this year, employees had been braced for weeks for a much deeper cut, as the BBC previously reported.

Meta's spending and internal focus had shifted heavily in recent months toward catching up on the development of AI models and tools.

The company just this week informed employees that it would begin tracking and logging their interactions with work computers in order to help train and improve its AI models, a move one employee called "dystopian" given the looming layoffs.

"This company has become obsessed with AI," they told the BBC.

Since 2022, Meta has enacted several rounds of job cuts, shedding tens of thousands of workers.

But it had started hiring again, and last year its overall number of employees looked to be back at around the same level it had been before its initial layoffs.

The upcoming jobs cuts will be Meta's largest layoff since 2023.

A number of other tech firms, most of which are also spending huge sums on building tools and infrastructure for AI technology, have also enacted swathes of job cuts this year.

Amazon has laid off more than 30,000 workers. Oracle laid off more than 10,000 workers.

Block, which is among the smaller tech companies, laid off nearly half of its staff, totaling more than 4,000 workers. And Snap, another smaller tech company, has laid off around 1,000.

Also on Thursday, Microsoft told employees that it would offer thousands of workers with longer tenure at the firm voluntary buyouts.

Nearly all of the companies have cited the growing capabilities of, or increased investment in, AI technology as a factor in executives' perceived need for fewer employees.

BBC
 