What's new

ChatGPT and artificial intelligence

LLMs seem to have reached a plateau in capability, which is arguably good: it should end the panic mode and let people finally start building tools and agents.

The majority view is that LLMs will not achieve AGI, and in hindsight that does look like their limitation. Even so, current LLM capabilities will help enhance every industry. The problem is that economies are suffering across the world, and there is a good chance Big Tech will cripple the startups, leaving the technology in the hands of a few (hopefully not).
 

Google Bets $40 Billion on Texas Buildout of U.S. AI Infrastructure

Google announced its largest-ever investment in any U.S. state on Friday, committing $40 billion to Texas through 2027 to add three new data center campuses and make the Lone Star State a centerpiece of its global AI data center footprint.

The announcement came at Google’s data center facility in Midlothian, where Alphabet and Google CEO Sundar Pichai joined Gov. Greg Abbott, U.S. Deputy Secretary of Energy James Danly, U.S. Rep. Jake Ellzey, and Amanda Peterson Corio, Google’s global head of data center energy.

“We are making a new $40 billion investment in the Lone Star State,” Pichai said at the event. “This includes plans for three new data center campuses, one in Armstrong County and two in Haskell County.”

The investment bundles cloud and artificial intelligence infrastructure, energy capacity initiatives, continued buildout in North Texas, and workforce training programs.

“This is a Texas-sized investment in the future of our great state,” Abbott said. “Texas is the epicenter of AI development, where companies can pair innovation with expanding energy.” The investment, he said, is “Google’s largest investment in any state in the country and supports energy efficiency and workforce development in our state.”

Google makes Texas its AI infrastructure anchor
“Texas will be the centerpiece for AI data centers for Google,” said Abbott.

Pichai explained why Texas was selected for the massive investment. “Data centers of that scale require a few things: good, pro-innovation regulatory environments, land, and especially energy,” he said in Midlothian. “Happily, we found all three in Texas.”

Thanking Abbott for “leading the way,” Pichai added, “They say everything is bigger in Texas, and I think that applies to the golden opportunity with AI, the optimism, the talent, the policy environment, and the innovation needed to lead this new era and create immense benefits for everyone.”

The multi-billion-dollar investment will “create thousands of jobs, provide skills training to college students and electrical apprentices, and accelerate energy affordability initiatives throughout Texas,” he said.

The three new West Texas data centers expand Google’s data center footprint, which already includes Ellis County campuses in Midlothian and Red Oak. Google has maintained a Texas presence for more than 15 years, Pichai said, with thousands of employees across the state and offices in Dallas, Austin, and Houston.

Net-positive power additions to the grid and Google’s “first industrial park”
Energy strategy emerged as a central theme of the announcement, with Google committing to be a net-positive contributor to the Texas grid as it builds out new AI infrastructure.

“When we invest in data centers, part of our core strategy is to invest significantly in new energy capacity, which increases supply and ensures grid abundance for everyone,” Pichai said from the podium. “In Texas, we work with local utility partners to add more than 6,200 megawatts of net new energy generation and capacity to the grid to keep costs low.”

According to Google’s news release, the company has power purchase agreements with energy developers such as AES Corporation, Enel North America, Clearway, ENGIE, Intersect, SB Energy, Ørsted, and X-Elio.

Abbott noted Google has provided a net new addition of five gigawatts of power to the Texas grid so far — and committed to be net positive to the power grid as the build continues, “making sure we have the reliability and the supply that Texas needs to keep the lights on for all of our fellow Texans.”

For scale, 5 gigawatts is equivalent to 5,000 megawatts.

One of the new Haskell County data centers will be built directly alongside a new solar and battery energy storage plant. Pichai said Google’s first industrial park, developed through a partnership with Intersect Power, will be co-located at the Haskell data center. “Co-locating energy supply with data center load has some powerful benefits. It can unlock the development of new transmission infrastructure and optimize utilization of the existing grid,” he said.

Google announced its industrial park partnership with Intersect and TPG Rise Climate last year, described as a strategic collaboration to develop “powered land” to allow the data center to run mostly on carbon-free electricity produced nearby.

U.S. Deputy Secretary of Energy on AI’s power demands
James Danly, the U.S. Deputy Secretary of Energy and the second-highest official at the Department of Energy, used the Midlothian announcement to zoom out to the national stakes of AI’s rising power needs.

“We’re going to need 100 gigawatts over the next number of years in order to satisfy the demand as we projected,” he said, noting the investment comes “at the same time as the President’s agenda focuses on ensuring safe, reliable, affordable, secure power for everybody, in abundance.”

In the company’s news release, Danly framed Google’s move as directly tied to that agenda. “This historic investment from Google advances this administration’s goal of winning the AI race,” he said in a statement. “Google is working to sustain and enhance America’s global AI dominance, economic competitiveness and national security while ensuring that energy remains reliable, affordable and secure.”

Danly also pointed to Texas policy as a key enabler of projects on this scale. “This is a state that has ensured energy abundance for everybody. It has made it possible to deploy new resources rapidly,” he said. “This is a model for how such projects should be done going forward, and the United States is going to be put in a very good position as a result of such projects.”

Pichai also tied the announcement to federal policy and thanked the administration. “We won’t be here without the leadership of President Trump and the White House AI action plan,” he said.

 

Google boss says trillion-dollar AI investment boom has 'elements of irrationality'

Every company would be affected if the AI bubble were to burst, the head of Google's parent firm Alphabet has told the BBC.

Speaking exclusively to BBC News, Sundar Pichai said while the growth of artificial intelligence (AI) investment had been an "extraordinary moment", there was some "irrationality" in the current AI boom.

It comes amid fears in Silicon Valley and beyond of a bubble as the value of AI tech companies has soared in recent months and companies spend big on the burgeoning industry.

Asked whether Google would be immune to the impact of the AI bubble bursting, Mr Pichai said the tech giant could weather that potential storm, but also issued a warning.

"I think no company is going to be immune, including us," he said.

In a wide-ranging exclusive interview at Google's California headquarters, he also addressed energy needs, slowing down climate targets, UK investment, the accuracy of his AI models, and the effect of the AI revolution on jobs.

The interview comes as scrutiny on the state of the AI market has never been more intense.

Alphabet shares have doubled in value in seven months to $3.5tn (£2.7tn) as markets have grown more confident in the search giant's ability to fend off the threat from ChatGPT owner OpenAI.

A particular focus is Alphabet's development of specialised superchips for AI that compete with Nvidia, run by Jensen Huang, which recently reached a world-first $5tn valuation.

As valuations rise, some analysts have expressed scepticism about a complicated web of $1.4tn of deals being done around OpenAI, which is expected to have revenues this year of less than one thousandth of the planned investment.

It has raised fears stock markets are heading for a repeat of the dotcom boom and bust of the late 1990s. This saw the values of early internet companies surge amid a wave of optimism for what was then a new technology, before the bubble burst in early 2000 and many share prices collapsed.

This led to some companies going bust, resulting in job losses. A drop in share prices can also hit the value of people's savings including their pension funds.

In comments echoing those made by US Federal Reserve chairman Alan Greenspan in 1996, warning of "irrational exuberance" in the market well ahead of the dotcom crash, Mr Pichai said the industry can "overshoot" in investment cycles like this.

"We can look back at the internet right now. There was clearly a lot of excess investment, but none of us would question whether the internet was profound," he said.

"I expect AI to be the same. So I think it's both rational and there are elements of irrationality through a moment like this."

His comments follow a warning from Jamie Dimon, the boss of US bank JP Morgan, who told the BBC last month that investment in AI would pay off, but some of the money poured into the industry would "probably be lost".

But Mr Pichai said Google's unique model of owning its own "full stack" of technologies - from chips to YouTube data, to models and frontier science - meant it was in a better position to ride out any AI market turbulence.

The tech giant is also expanding its footprint in the UK. In September, Alphabet announced it was investing in UK artificial intelligence, committing £5bn to infrastructure and research over the next two years.

Mr Pichai said Alphabet will develop "state of the art" research work in the UK including at its key AI unit DeepMind, based in London.

For the first time, he said Google would "over time" take a step that is being pushed for in government to "train our models" in the UK - a move that cabinet ministers believe would cement the UK as the number three AI "superpower" after the US and China.

"We are committed to investing in the UK in a pretty significant way," Mr Pichai said.

However, he also warned about the "immense" energy needs of AI, which made up 1.5% of the world's electricity consumption last year, according to the International Energy Agency.

Mr Pichai said action was needed, including in the UK, to develop new sources of energy and scale up energy infrastructure.

"You don't want to constrain an economy based on energy, and I think that will have consequences," he said.

He also acknowledged that the intensive energy needs of its expanding AI venture meant there was slippage on the company's climate targets, but insisted Alphabet still had a target of achieving net zero by 2030 by investing in new energy technologies.

"The rate at which we were hoping to make progress will be impacted," he said.

AI will also affect work as we know it, Mr Pichai said, calling it "the most profound technology" humankind had worked on.

"We will have to work through societal disruptions," he said, adding that it would also "create new opportunities".

"It will evolve and transition certain jobs, and people will need to adapt," he said. Those who do adapt to AI "will do better".

"It doesn't matter whether you want to be a teacher [or] a doctor. All those professions will be around, but the people who will do well in each of those professions are people who learn how to use these tools."

Source: BBC
 

Yann LeCun to leave Meta, launch AI startup focused on Advanced Machine Intelligence


Nov 19 (Reuters) - Yann LeCun, one of the founding figures of modern artificial intelligence and a pivotal force at Meta Platforms (META.O), said on Wednesday he plans to leave the company at the end of the year to launch a new AI startup.

LeCun has been a key part of Meta's artificial intelligence ambitions for more than a decade. He joined the company in 2013 to create Facebook AI Research (FAIR), the in-house lab that helped transform Meta into one of the AI leaders.

Over 12 years, he served five as FAIR's founding director and seven as the company's chief AI scientist, guiding breakthroughs in deep learning, computer vision and large-scale language modeling that underpin products like Instagram recommendations and Meta's generative AI systems.

He developed an early form of an artificial neural network that mimicked how the human eye and brain process images — technology that later became the backbone of modern image recognition and GenAI.

LeCun, 65, said his new venture will pursue Advanced Machine Intelligence (AMI) research — a project he has developed in collaboration with colleagues at FAIR and New York University, where he teaches.

The computer scientist said he will provide more details on his new firm at a later date, but added that Meta will be a partner in the venture, reflecting what he called the company's "continued interest and support" for AMI's long-term goals.

"The creation of FAIR is my proudest non-technical accomplishment," he wrote. "The impact of FAIR on Meta, the AI field, and the wider world has been spectacular."

Meta CEO Mark Zuckerberg and CTO Andrew Bosworth have both credited LeCun with laying the foundations for Meta's current AI infrastructure, including its open-source Llama models that have become a cornerstone of the global AI research community.

LeCun is widely regarded as one of the "godfathers" of deep learning, alongside Geoffrey Hinton and Yoshua Bengio — a trio that won the 2018 Turing Award, often called the Nobel Prize of computing.

Source: REUTERS
 

I don’t like this from Yann LeCun. Meta is chasing AGI and ASI, yet LeCun says in his interviews that AGI will never be achieved. He clearly does not believe in Meta’s goals, so starting his own AI startup makes sense. Others have already done the same after leaving OpenAI; I am talking about Ilya Sutskever.
 
7 useful prompts to learn anything from ChatGPT

1. Specify output format
When assigning a question to ChatGPT, you can specify how the reply is formatted.
For instance: "What are the longest highways in the United States? List the top four as bullet points."

2. Explain like I'm a beginner.
Prompt: Explain [topic] in simple terms. Explain to me as if I'm a beginner.

3. Fictional World Creation
Prompt: "Create a detailed fictional world where magic is commonplace, technology is advanced, and societies are governed by sentient AI. Describe the major factions, conflicts, and cultural nuances within this world."

4. AI-Generated Art Critique
Prompt: Analyze and critique a piece of artwork generated by an advanced AI algorithm. Discuss its originality, emotional impact, and artistic merit, considering the role of AI in shaping contemporary art.

5. Quiz yourself
Once you have learned a topic, use ChatGPT to quiz yourself and become more confident in it.
Prompt: Give me a short quiz that teaches me [what you want to learn]

6. Projects
Get project ideas from ChatGPT.
Prompt:
I am a beginner interested in .... To do this I need to know how to ..... Can you give me some beginner project ideas I could work on to strengthen my skills....

7. Political Speech Writing
Prompt: "Draft a compelling political speech advocating for comprehensive healthcare reform, addressing key issues such as access, affordability, and quality of care. Appeal to both emotions and logic to rally public support."

Source: Altiam Kabir (Facebook).
 
Meta buried 'causal' evidence of social media harm, US court filings allege

Meta shut down internal research into the mental health effects of Facebook after finding causal evidence that its products harmed users’ mental health, according to unredacted filings in a lawsuit by U.S. school districts against Meta and other social media platforms.

In a 2020 research project code-named “Project Mercury,” Meta (META.O) scientists worked with survey firm Nielsen to gauge the effect of “deactivating” Facebook, according to Meta documents obtained via discovery. To the company’s disappointment, “people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness and social comparison,” internal documents said.

Rather than publishing those findings or pursuing additional research, the filing states, Meta called off further work and internally declared that the negative study findings were tainted by the “existing media narrative” around the company.

Privately, however, a staffer insisted that the conclusions of the research were valid, according to the filing.

“The Nielsen study does show causal impact on social comparison,” (unhappy face emoji), an unnamed staff researcher allegedly wrote. Another staffer worried that keeping quiet about negative findings would be akin to the tobacco industry “doing research and knowing cigs were bad and then keeping that info to themselves.”

Despite Meta’s own work documenting a causal link between its products and negative mental health effects, the filing alleges, Meta told Congress that it had no ability to quantify whether its products were harmful to teenage girls.

In a statement Saturday, Meta spokesman Andy Stone said the study was stopped because its methodology was flawed and that it worked diligently to improve the safety of its products.

“The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens,” he said.

PLAINTIFFS ALLEGE PRODUCT RISKS WERE HIDDEN

The allegation of Meta burying evidence of social media harms is just one of many in a late Friday filing by Motley Rice, a law firm suing Meta, Google (GOOGL.O), TikTok and Snapchat (SNAP.N) on behalf of school districts around the country. Broadly, the plaintiffs argue the companies have intentionally hidden the internally recognized risks of their products from users, parents and teachers.

TikTok, Google and Snapchat did not immediately respond to a request for comment.

Allegations against Meta and its rivals include tacitly encouraging children below the age of 13 to use their platforms, failing to address child sexual abuse content and seeking to expand the use of social media products by teenagers while they were at school. The plaintiffs also allege that the platforms attempted to pay child-focused organizations to defend the safety of their products in public.

In one instance, TikTok sponsored the National PTA and then internally boasted about its ability to influence the child-focused organization. Per the filing, TikTok officials said the PTA would “do whatever we want going forward in the fall… (t)hey’ll announce things publicly(,), (t)heir CEO will do press statements for us.”

By and large, however, the allegations against the other social media platforms are less detailed than those against Meta. The internal documents cited by the plaintiffs allege:

  1. Meta intentionally designed its youth safety features to be ineffective and rarely used, and blocked testing of safety features that it feared might be harmful to growth.
  2. Meta required users to be caught 17 times attempting to traffic people for sex before it would remove them from its platform, which a document described as “a very, very, very high strike threshold."
  3. Meta recognized that optimizing its products to increase teen engagement resulted in serving them more harmful content, but did so anyway.
  4. Meta stalled internal efforts to prevent child predators from contacting minors for years due to growth concerns, and pressured safety staff to circulate arguments justifying its decision not to act.
  5. In a text message in 2021, Mark Zuckerberg said that he wouldn’t say that child safety was his top concern “when I have a number of other areas I’m more focused on like building the metaverse.” Zuckerberg also shot down or ignored requests by Nick Clegg, Meta's then-head of global public policy, to better fund child safety work.
Meta’s Stone disputed these allegations, saying the company’s teen safety measures are effective and that the company’s current policy is to remove accounts as soon as they are flagged for sex trafficking.

He said the suit misrepresents its efforts to build safety features for teens and parents, and called its safety work “broadly effective.”

"We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions,” Stone said.

The underlying Meta documents cited in the filing are not public, and Meta has filed a motion to strike the documents. Stone said the objection was to the over-broad nature of what plaintiffs are seeking to unseal, not unsealing in its entirety.

A hearing regarding the filing is set for January 26 in Northern California District Court.

 

Developers can now submit apps to ChatGPT

We’re opening app submissions for review and publication in ChatGPT, and users can discover apps in the app directory.

Earlier this year at DevDay, we introduced apps in ChatGPT. Starting today, developers can submit apps for review and publication in ChatGPT by following our app submission guidelines. Apps extend ChatGPT conversations by bringing in new context and letting users take actions like order groceries, turn an outline into a slide deck, or search for an apartment. We’ve published resources to help developers build high-quality apps that users will love, based on what we’ve learned since DevDay, like best practices on what makes a great ChatGPT app, open-source example apps, an open-sourced UI library for chat-native interfaces, and a step-by-step quickstart guide.

We’re also introducing an app directory right inside ChatGPT, where users can browse featured apps or search for any published app. The app directory is discoverable from the tools menu or directly from chatgpt.com/apps. Developers can also use deep links on other platforms to send users right to their app page in the directory.

Once users connect to apps, apps can get triggered during conversations when @ mentioned by name, or when selected from the tools menu. We’re also experimenting with ways to surface relevant, helpful apps directly within conversations—using signals like conversational context, app usage patterns, and user preferences—and giving users clear ways to provide feedback.

Building, submitting and monetizing apps​

Building a great ChatGPT app starts with designing for real user intent. Developers can use the Apps SDK, now in beta, to build chat-native experiences that bring context and action directly into ChatGPT. The strongest apps are tightly scoped, intuitive in chat, and deliver clear value by either completing real-world workflows that start in conversation or enabling new, fully AI-native experiences inside ChatGPT. We recommend reviewing the app submission guidelines early to help you build a high-quality app. Additional documentation and examples are available in the developer resource hub.

Once ready, developers can submit apps for review and track approval status in the OpenAI Developer Platform. Submissions include MCP connectivity details, testing guidelines, directory metadata, and country availability settings. The first set of approved apps will begin rolling out gradually in the new year. Apps that meet our quality and safety standards are eligible to be published in the app directory, and apps that resonate with users may be featured more prominently in the directory or recommended by ChatGPT in the future.

In this early phase, developers can link out from their ChatGPT apps to their own websites or native apps to complete transactions for physical goods. We’re exploring additional monetization options over time, including digital goods, and will share more as we learn from how developers and users build and engage.

Safety and privacy​

All developers are required to follow the app submission guidelines around safety, privacy, and transparency. Apps must comply with OpenAI’s usage policies, be appropriate for all audiences, and adhere to third-party terms of service when accessing their content. Developers must include clear privacy policies with every app submission, and we require developers to request only the information needed to make their apps work.

When a user connects to a new app, we will disclose what types of data may be shared with the third party and provide the app’s privacy policy for review. And users are always in control: disconnect an app at any time, and it immediately loses access.

Looking ahead

This is just the beginning. Over time, we want apps in ChatGPT to feel like a natural extension of the conversation, helping people move from ideas to action, while building a thriving ecosystem for developers. As we learn from developers and users, we’ll continue refining the experience for everyone. We also plan to grow the ecosystem of apps in ChatGPT, make apps easier to discover, and expand the ways developers can reach users and monetize their work.

Source: https://openai.com/index/developers-can-now-submit-apps-to-chatgpt/.
 
UK to bring into force law to tackle Grok AI deepfakes this week

The UK will bring into force a law which will make it illegal to create non-consensual intimate images, following widespread concerns over Elon Musk's Grok AI chatbot.

The Technology Secretary Liz Kendall said the government would also seek to make it illegal for companies to supply the tools designed to create such images.

Speaking to the Commons, Kendall said AI-generated pictures of women and children in states of undress, created without a person's consent, were not "harmless images" but "weapons of abuse".

The BBC has approached X for comment. It previously said: "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."

It comes hours after Ofcom announced it was launching an investigation into X over "deeply concerning reports" about Grok altering images of people.

In a statement, Kendall urged the regulator not to take "months and months" to conclude its investigation, and demanded it set out a timeline "as soon as possible".

It is currently illegal to share deepfakes of adults in the UK, but legislation that would make it a criminal offence to create or request them has, until now, not been brought into force, despite passing in June 2025.

Kendall said she would also make it a "priority offence" in the Online Safety Act.

"The content which has circulated on X is vile. It's not just an affront to decent society, it is illegal," she said.

"Let me be crystal clear - under the Online Safety Act, sharing intimate images of people without their consent, or threatening to share them, including pictures of people in their underwear, is a criminal offence for individuals and for platforms.

"This means individuals are committing a criminal offence if they create or seek to create such content including on X, and anyone who does this should expect to face the full extent of the law."


 
As expected, Anthropic is improving exponentially with its models. Haiku 4.5 has been extremely useful to me, but I have to say that unless I can use it to create all my flows or agents, I probably won't be able to move to the next level.

An unfortunate time for many in the coding and analyst fields, though; I don’t see how SaaS companies will employ so many folks with that much automation available.
 
I don’t know why Chinese firms do this; they are capable of being the best, but they resort to this petty nonsense.

—————

Anthropic Says Chinese AI Firms Used 16 Million Claude Queries to Copy Model


Anthropic on Monday said it identified "industrial-scale campaigns" mounted by three artificial intelligence (AI) companies, DeepSeek, Moonshot AI, and MiniMax, to illegally extract Claude's capabilities to improve their own models.


 
A huge day in the American tech world and a massive shake-up.

THE UNITED STATES OF AMERICA WILL NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS! That decision belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.

The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.

Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again! There will be a Six Month phase out period for Agencies like the Department of War who are using Anthropic’s products, at various levels. Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow.

WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about. Thank you for your attention to this matter. MAKE AMERICA GREAT AGAIN!

PRESIDENT DONALD J. TRUMP

https://truthsocial.com/@realDonaldTrump/116144552969293195





 
OpenAI isn’t able to keep up with Anthropic on the productivity levels required in the real world, but this is definitely a short-term gain for OpenAI, and regulation hell is incoming for Anthropic.
 
If Anthropic survives this and the Chinese attacks, they will make it big.

Mira should have defected to Anthropic; her ego and inability to realize what’s at stake will destroy countless $$.
 