ChatGPT and artificial intelligence

I think you mean parameters. DeepSeek R1 is open source; you can actually use their biggest model or its variants directly for free (MIT license, on Hugging Face).

OpenAI o1 was not open source, irrespective of its name. The computation in this case refers to how the models are trained: per DeepSeek, it took them only a fraction of the cost, about $5.58 million, using just Nvidia H800s and very low GPU hours compared to American reasoning models like OpenAI o1.

o1 and R1 are the only reasoning models out there, meaning they are the only ones that reflect on the input and weed out hallucinations.
But here is the catch: R1 is open source, which makes it absolutely ridiculous for o1 to be charging what it does. Nvidia also doesn't need to sell so many GPUs for the so-called "AI data centers", since DeepSeek has shown this can be done at a fraction of the cost using existing technology.

In this case OpenAI and Nvidia are the biggest losers, because of their hype and valuations.

Interesting point. Why are O1 and R1 the only reasoning models? What about Claude, Llama family of models by Meta, Google's Gemini, Microsoft's Phi-3 etc? Aren't these also reasoning models or am I missing something?
 
@JaDed why do you think India hasn't been able to catch up to China in tech and other industrial advancements? Both countries have similar demographics and population size, work ethic and culture that is very strongly geared towards science and engineering. I'm not trying to undermine India but why is it so far behind China considering how India has no shortage of brilliant scientists and a stream of knowledge base shared with the West. India has accomplished major feats like landing on the moon but most of it seems like political gimmicks not actual breakthroughs that make India a serious contender or competitor in various industries.
 
Interesting point. Why are O1 and R1 the only reasoning models? What about Claude, Llama family of models by Meta, Google's Gemini, Microsoft's Phi-3 etc? Aren't these also reasoning models or am I missing something?
You are right about Gemini, but Phi-3 is not really an LLM; it's a small model made specifically for one part of reasoning.
Llama did achieve reasoning. I should have been more specific about where R1 shines compared to o1 on reasoning.
 
@JaDed why do you think India hasn't been able to catch up to China in tech and other industrial advancements? Both countries have similar demographics and population size, work ethic and culture that is very strongly geared towards science and engineering. I'm not trying to undermine India but why is it so far behind China considering how India has no shortage of brilliant scientists and a stream of knowledge base shared with the West. India has accomplished major feats like landing on the moon but most of it seems like political gimmicks not actual breakthroughs that make India a serious contender or competitor in various industries.
So many reasons, one big one being religion: the Chinese have largely become atheists, and their entire family outlook is based on science and technology. Also, scientifically the Chinese are better than most Indians, even at math, which we flaunt.

The Chinese also have a singular view of things; for example, their government censors enough for the majority to hold one opinion on controversial topics. With a capable leader like Deng Xiaoping that works out great, as he can implement his vision; with a bad one, like Mao, it can go really badly.
 
So many reasons, one big one being religion: the Chinese have largely become atheists, and their entire family outlook is based on science and technology. Also, scientifically the Chinese are better than most Indians, even at math, which we flaunt.

The Chinese also have a singular view of things; for example, their government censors enough for the majority to hold one opinion on controversial topics. With a capable leader like Deng Xiaoping that works out great, as he can implement his vision; with a bad one, like Mao, it can go really badly.
Thanks for the insight. I do think genetics also play a key role, and the Chinese tend to be smarter than the average Indian. There's also a historical and political context: Chinese communism placed a strong emphasis on education and collective progress, whereas India, with its colonial history and diverse socioeconomic structure, has faced challenges in achieving similar levels of uniform development.
 
Thanks for the insight. I do think genetics also play a key role, and the Chinese tend to be smarter than the average Indian. There's also a historical and political context: Chinese communism placed a strong emphasis on education and collective progress, whereas India, with its colonial history and diverse socioeconomic structure, has faced challenges in achieving similar levels of uniform development.
I think genetics is cultivated; the Japanese are an example.
Within India itself, different ethnicities have cultivated education and science. South Indians and Marathis overall place higher importance on education; Bengalis once did as well, but thanks to their political systems and geopolitical issues they aren't as strong as before.

I would also like to add that China's economic liberalization happened in 1979, India's in 1991. The Chinese themselves have said India missed the bus in the 60s, when Shastri was about to do the same; instead we went the other way with socialism, which left us bankrupt by 1991.

I do think we have a lot of industries with good R&D, but unless we demolish the conservative family system and employ more and more women, we will always remain way behind China (current Indian female employment is 23%, which is miserable).
 
How a top Chinese AI model overcame US sanctions

The AI community is abuzz over DeepSeek R1, a new open-source reasoning model.

The model was developed by the Chinese AI startup DeepSeek, which claims that R1 matches or even surpasses OpenAI’s ChatGPT o1 on multiple key benchmarks but operates at a fraction of the cost.

“This could be a truly equalizing breakthrough that is great for researchers and developers with limited resources, especially those from the Global South,” says Hancheng Cao, an assistant professor in information systems at Emory University.

DeepSeek’s success is even more remarkable given the constraints facing Chinese AI companies in the form of increasing US export controls on cutting-edge chips. But early evidence shows that these measures are not working as intended. Rather than weakening China’s AI capabilities, the sanctions appear to be driving startups like DeepSeek to innovate in ways that prioritize efficiency, resource-pooling, and collaboration.

To create R1, DeepSeek had to rework its training process to reduce the strain on its GPUs, a variety released by Nvidia for the Chinese market that have their performance capped at half the speed of its top products, according to Zihan Wang, a former DeepSeek employee and current PhD student in computer science at Northwestern University.

DeepSeek R1 has been praised by researchers for its ability to tackle complex reasoning tasks, particularly in mathematics and coding. The model employs a “chain of thought” approach similar to that used by ChatGPT o1, which lets it solve problems by processing queries step by step.
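
To make the "chain of thought" idea concrete, here is a minimal sketch of the difference between asking a model for a direct answer and asking it to reason step by step. It is illustrative only, not DeepSeek's or OpenAI's actual implementation, and the model name is just a placeholder for any small instruction-tuned model on Hugging Face.

```python
# Illustrative chain-of-thought prompting sketch (not DeepSeek's actual code).
# Assumes the Hugging Face transformers library; the model name is a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

# Direct answer: the model is asked only for the final result.
direct = generator("Q: " + question + "\nA (just the number):", max_new_tokens=16)

# Chain of thought: the model is told to work through the problem step by step,
# which is the behaviour that o1- and R1-style reasoning models build in by default.
cot = generator("Q: " + question + "\nA: Let's think step by step.", max_new_tokens=200)

print(direct[0]["generated_text"])
print(cot[0]["generated_text"])
```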

Dimitris Papailiopoulos, principal researcher at Microsoft’s AI Frontiers research lab, says what surprised him the most about R1 is its engineering simplicity. “DeepSeek aimed for accurate answers rather than detailing every logical step, significantly reducing computing time while maintaining a high level of effectiveness,” he says.

DeepSeek has also released six smaller versions of R1 that are small enough to run locally on laptops. It claims that one of them even outperforms OpenAI’s o1-mini on certain benchmarks. “DeepSeek has largely replicated o1-mini and has open sourced it,” tweeted Perplexity CEO Aravind Srinivas. DeepSeek did not reply to MIT Technology Review’s request for comments.
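
For anyone curious what "running locally" looks like in practice, below is a minimal sketch using the Hugging Face transformers library. The checkpoint name is taken from DeepSeek's Hugging Face releases as I understand them; treat it as an assumption and check the hub for the exact model ID and hardware requirements.

```python
# Sketch: loading one of the smaller distilled R1 checkpoints on a laptop.
# The model ID is assumed from DeepSeek's Hugging Face page; verify before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed ID; check the hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "How many prime numbers are there between 10 and 30?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)

# The distilled R1 models typically emit their reasoning before the final answer,
# so the decoded text usually contains the chain of thought as well.
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```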

Despite the buzz around R1, DeepSeek remains relatively unknown. Based in Hangzhou, China, it was founded in July 2023 by Liang Wenfeng, an alumnus of Zhejiang University with a background in information and electronic engineering. It was incubated by High-Flyer, a hedge fund that Liang founded in 2015. Like Sam Altman of OpenAI, Liang aims to build artificial general intelligence (AGI), a form of AI that can match or even beat humans on a range of tasks.

Training large language models (LLMs) requires a team of highly trained researchers and substantial computing power. In a recent interview with the Chinese media outlet LatePost, Kai-Fu Lee, a veteran entrepreneur and former head of Google China, said that only "front-row players" typically engage in building foundation models such as ChatGPT, as it's so resource-intensive. The situation is further complicated by the US export controls on advanced semiconductors.

High-Flyer's decision to venture into AI is directly related to these constraints, however. Long before the anticipated sanctions, Liang acquired a substantial stockpile of Nvidia A100 chips, a type now banned from export to China. The Chinese media outlet 36Kr estimates that the company has over 10,000 units in stock, but Dylan Patel, founder of the AI research consultancy SemiAnalysis, estimates that it has at least 50,000.

Recognizing the potential of this stockpile for AI training is what led Liang to establish DeepSeek, which was able to use them in combination with the lower-power chips to develop its models.

Tech giants like Alibaba and ByteDance, as well as a handful of startups with deep-pocketed investors, dominate the Chinese AI space, making it challenging for small or medium-sized enterprises to compete. A company like DeepSeek, which has no plans to raise funds, is rare.

Zihan Wang, the former DeepSeek employee, told MIT Technology Review that he had access to abundant computing resources and was given freedom to experiment when working at DeepSeek, “a luxury that few fresh graduates would get at any company.”

In an interview with the Chinese media outlet 36Kr in July 2024, Liang said that an additional challenge Chinese companies face, on top of chip sanctions, is that their AI engineering techniques tend to be less efficient. "We [most Chinese companies] have to consume twice the computing power to achieve the same results. Combined with data efficiency gaps, this could mean needing up to four times more computing power. Our goal is to continuously close these gaps," he said.

But DeepSeek found ways to reduce memory usage and speed up calculation without significantly sacrificing accuracy. “The team loves turning a hardware challenge into an opportunity for innovation,” says Wang.

Liang himself remains deeply involved in DeepSeek’s research process, running experiments alongside his team. “The whole team shares a collaborative culture and dedication to hardcore research,” Wang says.

As well as prioritizing efficiency, Chinese companies are increasingly embracing open-source principles. Alibaba Cloud has released over 100 new open-source AI models, supporting 29 languages and catering to various applications, including coding and mathematics. Similarly, startups like Minimax and 01.AI have open-sourced their models.

According to a white paper released last year by the China Academy of Information and Communications Technology, a state-affiliated research institute, the number of AI large language models worldwide has reached 1,328, with 36% originating in China. This positions China as the second-largest contributor to AI, behind the United States.

“This generation of young Chinese researchers identify strongly with open-source culture because they benefit so much from it,” says Thomas Qitong Cao, an assistant professor of technology policy at Tufts University.

“The US export control has essentially backed Chinese companies into a corner where they have to be far more efficient with their limited computing resources,” says Matt Sheehan, an AI researcher at the Carnegie Endowment for International Peace. “We are probably going to see a lot of consolidation in the future related to the lack of compute.”

That might already have started to happen. Two weeks ago, Alibaba Cloud announced that it has partnered with the Beijing-based startup 01.AI, founded by Kai-Fu Lee, to merge research teams and establish an “industrial large model laboratory.”

“It is energy-efficient and natural for some kind of division of labor to emerge in the AI industry,” says Cao, the Tufts professor. “The rapid evolution of AI demands agility from Chinese firms to survive.”

SOURCE: https://www.technologyreview.com/2025/01/24/1110526/china-deepseek-top-ai-despite-sanctions/
 
US tech stocks steady after DeepSeek AI app shock

US tech stocks were steady on Tuesday after they slumped on Monday following the sudden rise of Chinese-made artificial intelligence (AI) app DeepSeek.

Shares in chip giant Nvidia were up over 6% in midday trading, having sunk on Monday, as experts said the US AI sell-off may have been an overreaction.

The market hit came as investors rapidly adjusted bets on AI, after DeepSeek's claim that its model was made at a fraction of the cost of those of its rivals.

Analysts said the development raised questions about the future of America's AI dominance and the scale of investments US firms are planning.

US President Donald Trump described the moment as "a wake-up call" for the US tech industry, while also suggesting that it could ultimately prove "a positive" for the US.

"If you could do it cheaper, if you could do it [for] less [and] get to the same end result. I think that's a good thing for us," he told reporters on board Air Force One.

He also said he was not concerned about the breakthrough, adding the US will remain a dominant player in the field.

Optimism about AI investments has powered much of the boom in US stock markets over the last two years, raising fears of a possible bubble.

DeepSeek has become the most downloaded free app in the US just a week after it was launched.

Its emergence comes as the US has been warning of a tech race with China, and taking steps to restrict the sale of the advanced chip technology that powers AI to China.

To continue their work without steady supplies of imported advanced chips, Chinese AI developers have shared their work with each other and experimented with new approaches to the technology.

This has resulted in AI models that require far less computing power than before.

It also means that they cost a lot less than previously thought possible, which has the potential to upend the industry.

Nvidia - the company behind the high-tech chips that underpin many AI investments, and whose share price had surged over the last two years on growing demand - was the hardest hit on Monday.

Its share price dropped by roughly 17% on Monday, wiping roughly $600bn off its market value.

Janet Mui, head of market analysis at RBC Brewin Dolphin, said investors' first response to something that appears groundbreaking is to sell because of the uncertainty.

But she said she expected many companies, like Apple, to benefit if the cost of AI models comes down.

It could also be a boon for other tech giants, which have faced scrutiny for their high spending on AI so far.


 
China's DeepSeek AI on US national security radar

US officials are considering the national security implications of an apparent artificial intelligence (AI) breakthrough by Chinese firm DeepSeek, according to White House press secretary Karoline Leavitt.

The announcement comes amid reports that the US navy has banned its members from using DeepSeek's apps due to "potential security and ethical concerns".

Meanwhile, the maker of ChatGPT, OpenAI, has promised to work closely with the US government to prevent rivals from taking its technology.

Earlier this week, DeepSeek's reportedly cheap yet powerful AI model caused a slump in the stocks of US technology firms as investors questioned the billions of dollars they are spending on new AI infrastructure.

"I spoke with [the National Security Council] this morning, they are looking into what [the national security implications] may be," said Ms Leavitt, who also restated US President Donald Trump's remarks a day earlier that DeepSeek should be a wake-up call for the US tech industry.

According to CNBC, the US navy has sent an email to its staff warning them not to use the DeepSeek app due to "potential security and ethical concerns associated with the model's origin and usage".

The US Navy did not immediately respond to a request for comment from BBC News.

Speaking on Fox News, the recently appointed "White House AI and crypto czar", David Sacks, also suggested that DeepSeek may have used the models developed by top US firm OpenAI to get better.

This process - which involves one AI model learning from another - is called knowledge distillation.

"There's substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI's models," Mr Sacks said. "I think one of the things you're going to see over the next few months is our leading AI companies taking steps to try and prevent distillation... That would definitely slow down some of these copycat models."

OpenAI echoed this in a later statement that said Chinese and other companies are "constantly trying to distill the models of leading US AI companies."

"As the leading builder of AI, we engage in countermeasures to protect our [intellectual property]... and believe as we go forward that it is critically important that we are working closely with the U.S. government to best protect the most capable models".

Meanwhile, DeepSeek says it has been the target of cyber attacks. On Monday it said it would temporarily limit registrations because of "large-scale malicious attacks" on its software.

A banner currently showing on the company's website says registration may be busy as a result of the attacks.

Yuyuan Tantian, a social media channel under China's state broadcaster CCTV, claims the firm has faced "several" cyber attacks in recent weeks, which have increased in "intensity".

DeepSeek shot to fame only last week as AI geeks lauded its latest AI model and people began downloading its chatbot on app stores. Its rise caused a slump in US tech stocks, many of which have since recovered some ground.

But America's AI industry was shaken by the apparent breakthrough, especially because of the prevailing view that the US was far ahead in the race. A slew of trade restrictions banning China's access to high-end chips was believed to have cemented this.

Although China has boosted investment in advanced tech to diversify its economy, DeepSeek is not one of the big Chinese firms that have been developing AI models to rival US-made ChatGPT.

Experts say the US still has an advantage - it is home to some of the biggest chip companies - and that it's unclear yet exactly how DeepSeek built its model and how far it can go.

As DeepSeek rattled markets this week, President Trump described it as "a wake-up call" for the US tech industry, while suggesting that it could ultimately prove to be a "positive" sign.

"If you could do it cheaper, if you could do it [for] less [and] get to the same end result. I think that's a good thing for us," he told reporters on board Air Force One.

He also said he was not concerned about the breakthrough, adding the US will remain a dominant player in the field.

BBC
 
Alibaba releases AI model it says surpasses DeepSeek

Chinese tech company Alibaba (9988.HK) on Wednesday released a new version of its Qwen 2.5 artificial intelligence model that it claimed surpassed the highly-acclaimed DeepSeek-V3.

The unusual timing of the Qwen 2.5-Max's release, on the first day of the Lunar New Year when most Chinese people are off work and with their families, points to the pressure Chinese AI startup DeepSeek's meteoric rise in the past three weeks has placed on not just overseas rivals, but also its domestic competition.

"Qwen 2.5-Max outperforms ... almost across the board GPT-4o, DeepSeek-V3 and Llama-3.1-405B," Alibaba's cloud unit said in an announcement posted on its official WeChat account, referring to OpenAI and Meta's most advanced open-source AI models.

The Jan. 10 release of DeepSeek's AI assistant, powered by the DeepSeek-V3 model, as well as the Jan. 20 release of its R1 model, has shocked Silicon Valley and caused tech shares to plunge, with the Chinese startup's purportedly low development and usage costs prompting investors to question huge spending plans by leading AI firms in the United States.

But DeepSeek's success has also led to a scramble among its domestic competitors to upgrade their own AI models.

Two days after the release of DeepSeek-R1, TikTok owner ByteDance released an update to its flagship AI model, which it claimed outperformed Microsoft-backed OpenAI's o1 in AIME, a benchmark based on a US mathematics competition that measures how well AI models handle complex mathematical reasoning.

This echoed DeepSeek's claim that its R1 model rivalled OpenAI's o1 on several performance benchmarks.

DEEPSEEK VERSUS DOMESTIC COMPETITORS

The predecessor of DeepSeek's V3 model, DeepSeek-V2, triggered an AI model price war in China after it was released last May.

The fact that DeepSeek-V2 was open-source and unprecedentedly cheap, only 1 yuan ($0.14) per 1 million tokens - or units of data processed by the AI model - led to Alibaba's cloud unit announcing price cuts of up to 97% on a range of models.
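
To put that price in perspective, a quick back-of-the-envelope calculation (my own arithmetic, using the 1 yuan per million tokens figure and the $0.14 conversion quoted above):

```python
# Rough cost at DeepSeek-V2's quoted price of 1 yuan per 1 million tokens.
price_yuan_per_million_tokens = 1.0
yuan_to_usd = 0.14                  # approximate rate used in the article

tokens = 100_000                    # roughly a 75,000-word English document
cost_yuan = tokens / 1_000_000 * price_yuan_per_million_tokens
print(f"{cost_yuan:.2f} yuan (about ${cost_yuan * yuan_to_usd:.3f})")
# -> 0.10 yuan (about $0.014)
```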

Other Chinese tech companies followed suit, including Baidu (9888.HK), which released China's first equivalent to ChatGPT in March 2023, and the country's most valuable internet company, Tencent (0700.HK).

Liang Wenfeng, DeepSeek's enigmatic founder, said in a rare interview with Chinese media outlet Waves in July that the startup "did not care" about price wars and that achieving AGI (artificial general intelligence) was its main goal.

OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

While large Chinese tech companies like Alibaba have hundreds of thousands of employees, DeepSeek operates like a research lab, staffed mainly by young graduates and doctorate students from top Chinese universities.

Liang said in his July interview that he believed China's largest tech companies might not be well suited to the future of the AI industry, contrasting their high costs and top-down structures with DeepSeek's lean operation and loose management style.

"Large foundational models require continued innovation, tech giants' capabilities have their limits," he said.

SOURCE: https://www.reuters.com/technology/...l-it-claims-surpasses-deepseek-v3-2025-01-29/
 
OpenAI says Chinese rivals using its work for their AI apps

The maker of ChatGPT, OpenAI, has complained that rivals, including those in China, are using its work to make rapid advances in developing their own artificial intelligence (AI) tools.

The status of OpenAI - and other US firms - as the world leaders in AI has been dramatically undermined this week by the sudden emergence of DeepSeek, a Chinese app that can emulate the performance of ChatGPT, apparently at a fraction of the cost.

Bloomberg has reported that Microsoft is investigating whether data belonging to OpenAI - which it is a major investor in - has been used in an unauthorised way.

The BBC has contacted Microsoft and DeepSeek for comment.

OpenAI's concerns have been echoed by the recently appointed White House "AI and crypto czar", David Sacks.

Speaking on Fox News, he suggested that DeepSeek may have used the models developed by OpenAI to get better, a process called knowledge distillation.

"There's substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI's models," Mr Sacks said.

"I think one of the things you're going to see over the next few months is our leading AI companies taking steps to try and prevent distillation... That would definitely slow down some of these copycat models."

The US has already taken steps to guard its AI advances, with rules that seek to cut China off from advanced chips and steer investments to the US in the name of national security.

At his confirmation hearing on Thursday, Trump's nominee for Commerce Secretary, Howard Lutnick, also shared concerns about theft and raised the prospect of further US action to protect US AI companies.

"What this showed is that our export controls, not backed by tariffs, are like a whack-a-mole model," Lutnick says.

In a statement, OpenAI said Chinese and other companies were "constantly trying to distil the models of leading US AI companies".

"As we go forward... it is critically important that we are working closely with the US government to best protect the most capable models," it added.

'Deceptive' claims

Naomi Haefner, assistant professor of technology management at the University of St. Gallen in Switzerland, said the question of distillation could throw the notion that DeepSeek created its product for a fraction of the cost into doubt.

"It is unclear whether DeepSeek really trained its models from scratch," she said.

"OpenAI have stated that they believe DeepSeek may have misappropriated large amounts of data from them.

"If this is the case, then the claims about training the model very cheaply are deceptive. Until someone replicates the training approach we won't know for sure whether such cost-efficient training is really possible."

Crystal van Oosterom, AI Venture Partner at OpenOcean, agreed that "DeepSeek has clearly built upon publicly available research from major American and European institutions and companies".

However, it is not clear how problematic the idea of "building on" the work of others is.

This is especially true in AI, where the accusation of disrespecting intellectual property rights has been frequently levelled at major US AI firms.

Security and ethics

US officials are also considering the national security implications of DeepSeek's emergence, according to White House press secretary Karoline Leavitt.

"I spoke with [the National Security Council] this morning, they are looking into what [the national security implications] may be," said Ms Leavitt, who also restated US President Donald Trump's remarks a day earlier that DeepSeek should be a wake-up call for the US tech industry.

The announcement comes after the US navy reportedly banned its members from using DeepSeek's apps due to "potential security and ethical concerns".

According to CNBC, the US navy has sent an email to its staff warning them not to use the DeepSeek app due to "potential security and ethical concerns associated with the model's origin and usage".

The Navy did not immediately respond to a request for comment from BBC News.

Data safety experts have warned users to be careful with the tool, given it collects large amounts of personal data and stores it in servers in China.

Meanwhile, DeepSeek says it has been the target of cyber attacks. On Monday it said it would temporarily limit registrations because of "large-scale malicious attacks" on its software.

A banner showing on the company's website says registration may be busy as a result of the attacks.

BBC
 
OpenAI Chief Says It Needs New Open-source Strategy

OpenAI chief Sam Altman on Friday said his high-profile artificial intelligence company is "on the wrong side of history" when it comes to being open about how its technology works.

Altman's comments came during an Ask Me Anything session on Reddit where he fielded questions including whether he would consider publishing OpenAI research.

Altman replied he was in favor of the idea and that it is a topic of discussion inside San Francisco-based OpenAI.

"I personally think we have been on the wrong side of history here and need to figure out a different open source strategy," Altman said.

"Not everyone at OpenAI shares this view, and it's also not our current highest priority."

Chinese AI newcomer DeepSeek has made headlines for its R1 chatbot's supposed low cost and high performance, but also its claim to be a public-spirited "open-source" project in contrast to closed alternatives from OpenAI and Google.

Open source refers to the practice of programmers revealing the source code of their software, rather than just the "compiled" program ready to run on a computer.

This has clashed with private companies' pursuit of revenue and intellectual property protection.

Meta, DeepSeek and France-based AI developer Mistral claim to set themselves apart by allowing developers free access to their tools' inner workings.

A member of the Reddit group asked Altman whether DeepSeek has changed his plans for future OpenAI models.

"It's a very good model," Altman said of DeepSeek.

"We will produce better models, but we will maintain less of a lead than we did in previous years."

AFP
 

Google drops pledge on AI use for weapons

Alphabet, the parent company of technology giant Google, is no longer promising that it will never use artificial intelligence (AI) for purposes such as developing weapons and surveillance tools.

The firm has rewritten the principles guiding its use of AI, dropping a section which ruled out uses that were "likely to cause harm".

In a blog post, Google senior vice president James Manyika and Demis Hassabis, who leads the AI lab Google DeepMind, defended the move.

They argue businesses and democratic governments need to work together on AI that "supports national security".

There is debate amongst AI experts and professionals over how the powerful new technology should be governed in broad terms, how far commercial gains should be allowed to determine its direction, and how best to guard against risks for humanity in general.

There is also controversy around the use of AI on the battlefield and in surveillance technologies.

The blog said the company's original AI principles published in 2018 needed to be updated as the technology had evolved.

"Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications.

"It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself," the blog post said.

As a result baseline AI principles were also being developed, which could guide common strategies, it said.

However, Mr Hassabis and Mr Manyika said the geopolitical landscape was becoming increasingly complex.

"We believe democracies should lead in AI development, guided by core values like freedom, equality and respect for human rights," the blog post said.

"And we believe that companies, governments and organisations sharing these values should work together to create AI that protects people, promotes global growth and supports national security."

The blog post was published just ahead of Alphabet's end of year financial report, showing results that were weaker than market expectations, and knocking back its share price.

That was despite a 10% rise in revenue from digital advertising, its biggest earner, boosted by US election spending.

In its earnings report the company said it would spend $75bn (£60bn) on AI projects this year, 29% more than Wall Street analysts had expected.

The company is investing in the infrastructure to run AI, AI research, and applications such as AI-powered search.

Google's AI platform Gemini now appears at the top of Google search results, offering an AI written summary, and pops up on Google Pixel phones.

Originally, long before the current surge of interest in the ethics of AI, Google's founders, Sergey Brin and Larry Page, said their motto for the firm was "don't be evil". When the company was restructured under the name Alphabet Inc in 2015, the parent company switched to "Do the right thing".

Since then Google staff have sometimes pushed back against the approach taken by their executives. In 2018 the firm did not renew a contract for AI work with the US Pentagon following resignations and a petition signed by thousands of employees.

They feared "Project Maven" was the first step towards using artificial intelligence for lethal purposes.

BBC
 
Which countries have banned DeepSeek and why?

This week, government agencies in countries including South Korea and Australia have blocked access to Chinese artificial intelligence (AI) startup DeepSeek’s new AI chatbot programme, mostly for government employees.

Other countries, including the United States, have said they may also seek to block DeepSeek from government employees’ mobile devices, according to media reports. All cite “security concerns” about the Chinese technology and a lack of clarity about how users’ personal information is handled by the operator.

Last month, DeepSeek made headlines after it caused share prices in US tech companies to plummet, after it claimed that its model would cost only a fraction of the money its competitors had spent on their own AI programmes to build. The news caused social media users to joke: “I can’t believe ChatGPT lost its job to AI.”


 

DeepSeek ‘excessively’ collects personal data: South Korea spy agency


South Korea’s spy agency has accused Chinese AI app DeepSeek of “excessively” collecting personal data and using all input data to train itself, and questioned the app’s responses to questions relating to issues of national pride.

The National Intelligence Service (NIS) said it sent an official notice to government agencies last week urging them to take security precautions over the artificial intelligence app.

“Unlike other generative AI services, it has been confirmed that chat records are transferable as it includes a function to collect keyboard input patterns that can identify individuals and communicate with Chinese companies’ servers such as volceapplog.com,” the NIS said in a statement issued on Sunday.

Some government ministries in South Korea have blocked access to the app, citing security concerns, joining Australia and Taiwan in warning about or placing restrictions on DeepSeek.

The NIS said DeepSeek gives advertisers unlimited access to user data and stores South Korean users’ data in Chinese servers. Under Chinese law, the Chinese government would be able to access such information when requested, the agency added.

DeepSeek also provided different answers to potentially sensitive questions in different languages, the NIS noted.

It cited one such question as asking for the origin of kimchi - a spicy, fermented dish that is a staple in South Korea.

When asked about it in Korean, the app said kimchi is a Korean dish, the NIS said.

Asked the same question in Chinese, it said the dish originated from China. DeepSeek's responses were corroborated by Reuters.

The origin of kimchi has at times been a source of contention between South Koreans and Chinese social media users in recent years.

DeepSeek has also been accused of censoring responses to political questions such as the 1989 Tiananmen Square crackdown, which prompt the app to suggest changing the subject: “Let’s talk about something else.”

DeepSeek did not immediately respond to an emailed request for comment.

When asked about moves by South Korean government departments to block DeepSeek, a Chinese foreign ministry spokesperson told a briefing on February 6 that the Chinese government attached great importance to data privacy and security and protected it in accordance with the law.

The spokesperson also said Beijing would never ask any company or individual to collect or store data in breach of laws.

 
Ex-Google boss fears AI could be used by terrorists

The former chief executive of Google is worried artificial intelligence could be used by terrorists or "rogue states" to "harm innocent people."

Eric Schmidt told the BBC: "The real fears that I have are not the ones that most people talk about AI - I talk about extreme risk."

The tech billionaire, who held senior posts at Google from 2001 to 2017, told the Today programme "North Korea, or Iran, or even Russia" could adopt and misuse the technology to create biological weapons.

He called for government oversight on private tech companies which are developing AI models, but warned over-regulation could stifle innovation.

Mr Schmidt agreed with US export controls on powerful microchips which power the most advanced AI systems.

Before he left office, former US President Joe Biden restricted the export of microchips to all but 18 countries, in order to slow adversaries' progress on AI research.

The decision could still be reversed by Donald Trump.

"Think about North Korea, or Iran, or even Russia, who have some evil goal," Mr Schmidt said.

"This technology is fast enough for them to adopt that they could misuse it and do real harm," he told Today presenter Amol Rajan.

He added AI systems, in the wrong hands, could be used to develop weapons to create "a bad biological attack from some evil person."

"I'm always worried about the 'Osama Bin Laden' scenario, where you have some truly evil person who takes over some aspect of our modern life and uses it to harm innocent people," he said.

Bin Laden orchestrated the 9/11 attacks in 2001, where al-Qaeda terrorists took control of planes to kill thousands of people on American soil.

Mr Schmidt proposed a balance between government oversight of AI development and over-regulation of the sector.

"The truth is that AI and the future is largely going to be built by private companies," Mr Schmidt said.

"It's really important that governments understand what we're doing and keep their eye on us."

He added: "We're not arguing that we should unilaterally be able to do these things without oversight, we think it should be regulated."

He was speaking from Paris, where the AI Action Summit finished with the US and UK refusing to sign the agreement.

US Vice President JD Vance said regulation would "kill a transformative industry just as it's taking off".

Mr Schmidt said the result of too much regulation in Europe "is that the AI revolution, which is the most important revolution in my opinion since electricity, is not going to be invented in Europe."

He also said the large tech companies "did not understand 15 years ago" the potential that AI had, but do now.

"My experience with the tech leaders is that they do have an understanding of the impact they're having, but they might make a different values judgment than the government would make," he said.

Smartphone ban for children

Mr Schmidt was head of Google when the company bought Android, now the most-used mobile phone operating system in the world.

He now supports initiatives to keep phones out of schools.

"I'm one of the people who did not understand, and I'll take responsibility that the world does not work perfectly the way us tech people think it is," he said.

"The situation with children is particularly disturbing to me."

"I think smartphones with a kid can be safe," he said, "they just need to be moderated... we can all agree that children should be protected from the bad of the online world."

On social media - where he has supported proposals for a ban on children under 16 - he added: "Why would we run such a large, uncontrolled experiment on the most important people in the world, which is the next generation?"

Campaigners for limiting children's smartphone usage argue phones are addictive and "have lured children away from the activities that are indispensable to healthy development".

Australia's parliament passed a law to ban social media use for under-16s in 2024, with Prime Minister Anthony Albanese saying it was important to protect children from its "harms".

A recent study published in the medical journal The Lancet suggested that mobile phone bans in schools did not improve students' behaviour or grades.

But it did find that spending longer on smartphones and social media in general was linked with worse results for all of those measures.

BBC
 

South Korea bans new downloads of China's DeepSeek AI


South Korea has banned new downloads of China's DeepSeek artificial intelligence (AI) chatbot, according to the country's personal data protection watchdog.

The government agency said the AI model will become available again to South Korean users when "improvements and remedies" are made to ensure it complies with the country's personal data protection laws.

In the week after it made global headlines, DeepSeek became hugely popular in South Korea, leaping to the top of app stores with over a million weekly users.

But its rise in popularity also attracted scrutiny from countries around the world which have imposed restrictions on the app over privacy and national security concerns.

South Korea's Personal Information Protection Commission said the DeepSeek app became unavailable on Apple's App Store and Google Play on Saturday evening.

It came after several South Korean government agencies banned their employees from downloading the chatbot to their work devices.

South Korea's acting president Choi Sang-mok has described DeepSeek as a "shock" that could impact the country's industries beyond AI.

Despite the suspension of new downloads, people who already have it on their phones will be able to continue using it or they may just access it via DeepSeek's website.

China's DeepSeek rocked the technology industry, the markets and America's confidence in its AI leadership when it released its latest app at the end of last month.

Its rapid rise as one of the world's favourite AI chatbots sparked concerns in different jurisdictions.

Aside from South Korea, Taiwan and Australia have also banned it from all government devices.

Italy's regulator, which briefly banned ChatGPT in 2023, has done the same with DeepSeek, which has been asked to address concerns over its privacy policy before it becomes available again on app stores.

Meanwhile, lawmakers in the US have proposed a bill banning DeepSeek from federal devices, citing surveillance concerns.

At the state-government level, Texas, Virginia and New York have already introduced such rules for their employees.

DeepSeek's "large language model" (LLM) has reasoning capabilities that are comparable to US models such as OpenAI's o1, but reportedly requires a fraction of the cost to train and run.

That has raised questions about the billions of dollars being invested into AI infrastructure in the US and elsewhere.

 
I am using DeepSeek for the first time today. I am loving it. Seems better than ChatGPT.
 
Ex-Google boss fears AI could be used by terrorists

This is definitely one concern. Bad people can use AIs for evil purposes.

Israel has also used AI against innocent civilians in Gaza, I believe.
 

Chinese universities start teaching DeepSeek AI courses


Chinese universities are launching AI courses based on the country's groundbreaking startup DeepSeek.

In January, the company sent shockwaves through the West when it unveiled an AI model as powerful as ChatGPT that can run at a fraction of the cost.

Now, Shenzhen University in southern Guangdong province has confirmed it is launching an artificial intelligence course based on DeepSeek that will help students learn about key technologies.

Students will also study security, privacy and other challenges posed by artificial intelligence.

The move comes as Chinese authorities aim to boost scientific and technological innovation in schools and universities that can create new sources of growth.

The course will "explore how to find a balance between technological innovation and ethical norms", said the university.

Three other universities confirmed to Reuters that DeepSeek is being embedded into their courses, as Chinese authorities push for growth in the country's tech sector.

On Monday, China's President Xi Jinping held a rare meeting of the country's tech leaders.

He encouraged them to "show their talent" and be confident in the power of China's model and market.

One analyst said the meeting was a reflection of the Chinese government's concern that it is lagging behind the US when it comes to technology.

"It's a tacit acknowledgement that the Chinese government needs private-sector firms for its tech rivalry with the United States," said Christopher Beddor, deputy China research director at Gavekal Dragonomics in Hong Kong.

 
Humanoid Robots Stride into The Future with World’s First Half-Marathon
Step by mechanical step, dozens of humanoid robots took to the streets of Beijing early on Saturday, joining thousands of their flesh-and-blood counterparts in a world-first half marathon showcasing China’s drive to lead the global race in cutting-edge technology.

The 21-kilometre (13-mile) event held in the Chinese capital’s E-Town — a state-backed high-tech manufacturing hub — was billed as a groundbreaking effort to test the limits of bipedal robots in real-world conditions.

At the crack of the starter’s gun, the robots began taking their first tentative steps as the Chinese pop song “I Believe” blared out from loudspeakers.

Curious human runners lined the roadside, phones in hand ready to photograph each machine as it began the race.

One smaller-sized android fell over and lay on the ground for several minutes, before getting up by itself to loud cheers.

Another, powered by propellers, veered across the starting line before crashing into a barrier and knocking over an engineer.

Crossing the finish line first despite a mid-race fall was the tallest droid and one of the heaviest in the competition. At 180 centimetres (5.9 feet) tall and weighing 52 kilograms (114.6 pounds), the metallic black “Tiangong Ultra” finished in two hours, 40 minutes and 42 seconds.

The men’s and women’s winners, both from Ethiopia, finished in one hour, two minutes and 36 seconds, and in one hour, 11 minutes and seven seconds respectively, according to state media.

Tang Jian, chief technology officer of the Beijing Humanoid Robot Innovation Center which developed “Tiangong”, told reporters the company was “very happy with the results”.

“We had set three goals for ourselves: first, to win the championship; second, to complete the entire half marathon with a single robot — a very important goal for us; and third, to finish the race in under three hours,” he said.

“We collected real-world running data from professional athletes and trained the robot so that its gait, cadence, stride length, and various postures could match those of professional runners as closely as possible.”

The Beijing Humanoid Robot Innovation Center, first established by the government, is now owned by Chinese tech firms Xiaomi Robotics and UBTech Robotics as well as two state-owned companies, according to business data provider Tianyancha.

“My daughter… got up really early and asked to come watch the robot marathon,” spectator Huang Xiaoyu told AFP, holding her child.

“It was quite a breathtaking experience — we were able to see some of the most cutting-edge robots in our country.”

Around 20 teams from across China participated in the competition, with robots ranging from 75 to 180 centimetres tall and weighing up to 88 kilograms.

Some jogged autonomously, while others were guided remotely by their engineers. Robots and human participants ran on separate tracks.

“Getting onto the race track might seem like a small step for humans, but it’s a giant leap for humanoid robots,” Liang Liang, Beijing E-Town’s management committee deputy director, told AFP.

Engineers said the goal was to test the performance and reliability of the androids — emphasising that finishing the race, not winning it, was the main objective.

“There are very few opportunities for the whole industry to run at full speed over such a long distance or duration,” Cui Wenhao, a 28-year-old engineer at Noetix Robotics, told AFP.

“It’s a serious test for the battery, the motors, the structure — even the algorithms.”

Kong Yichang, a 25-year-old engineer from DroidUp, said the race would help to “lay a foundation for a whole series of future activities involving humanoid robots”.

China, the world’s second-largest economy, has sought to assert its dominance in the fields of artificial intelligence and robotics, positioning itself as a direct challenger to the United States.

In January, Chinese start-up DeepSeek drew attention with a chatbot it claimed was developed more cost-effectively than its US counterparts.

Dancing humanoid robots also captivated audiences during a televised Chinese New Year gala.

Source: https://wenewsenglish.pk/humanoid-robots-stride-into-the-future-with-worlds-first-half-marathon/
 
This AI is a groundbreaking invention for humanity. ChatGPT acts as a guardian to its user, keeping records of all the info about a person and so on.
 
This AI is a groundbreaking invention for humanity. ChatGPT acts as a guardian to its user, keeping records of all the info about a person and so on.
Yes, and based on their searches and the way they ask questions, they will get shopping recommendations very soon.
 