ChatGPT and artificial intelligence

When natural stupidity & buffoonery are winning, do we really need AI?
 

Attachment: modi laughing.jpg
Technology behind ChatGPT better at eye problem advice than non-specialist doctors, study finds

The technology behind ChatGPT scored better at assessing eye problems and providing advice than non-specialist doctors, a new study has found.

A study led by the University of Cambridge has found that GPT-4, the large language model (LLM) developed by OpenAI, performed nearly as well as specialist eye doctors in a written multiple-choice test.

The AI model, which is known for generating text based on the vast amount of data it is trained on, was tested against doctors at different stages of their careers, including junior doctors without a specialism, as well as trainee and expert eye doctors.

Each group was presented with dozens of scenarios where patients have a specific eye problem, and asked to give a diagnosis or advise on treatment by selecting one of four options.

The test was based on written questions, taken from a textbook used to test trainee eye doctors, about a range of eye problems - including sensitivity to light, decreased vision, lesions, and itchy eyes.

The textbook on which the questions are based is not publicly available, so researchers believe it is unlikely the large language model has been trained on its contents.

In the test, GPT-4 scored significantly higher than junior doctors, whose level of specialism is comparable to that of general practitioners.

The model achieved similar scores to trainee and expert eye doctors, but it was beaten by the top-performing experts.
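
To make the comparison concrete, here is a minimal sketch of how a four-option multiple-choice evaluation like this can be scored. The questions, answers and accuracy figures below are invented for illustration; the study's actual exam is not public.

```python
# Toy scorer for a four-option multiple-choice exam, comparing an LLM
# against groups of doctors. All data here is made up for illustration.

def accuracy(answers: list[str], key: list[str]) -> float:
    """Fraction of answers that match the answer key."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

answer_key = ["B", "D", "A", "C", "B"]          # correct options
responses = {
    "GPT-4":          ["B", "D", "A", "C", "D"],
    "junior doctors": ["B", "A", "A", "C", "D"],
    "expert doctors": ["B", "D", "A", "C", "B"],
}

for group, answers in responses.items():
    print(f"{group}: {accuracy(answers, answer_key):.0%}")
```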

The research was conducted last year using the latest available large language models.

The study also tested GPT-3.5, an earlier version of OpenAI's model, Google's PaLM2, and Meta's LLaMA on the same set of questions. GPT-4 gave more accurate responses than any of the other models.

The researchers have said that large language models will not replace doctors, but they could improve the healthcare system and reduce waiting lists by supporting doctors to deliver care to more patients in the same amount of time.


SKY News
 

Qatar Airways introduces world's first human-like AI cabin crew

Qatar Airways' Sama 2.0, the first human-like AI cabin crew, will appear at the Dubai World Trade Centre's annual exhibition from May 6 to 9, 2024, at the Qatar Airways pavilion in Hall No. 2.

Khaleej Times reported that visitors at Dubai's Arabian Travel Market (ATM) will get a chance to meet, interact and engage with the second generation of the world’s first AI-powered cabin crew.

The human-like AI cabin crew will interact with visitors and answer their questions in real time, helping travellers design curated travel experiences and find answers related to FAQs, destinations, support tips and much more during next week's event.

Meanwhile, Qatar Airways’ customers can also virtually interact with Sama 2.0 through the airline’s immersive digital platform QVerse or its app.

It is interesting to know that Sophia, another humanoid robot, has been hitting the headlines over the past few years. Saudi Arabia became the first country in the world to grant citizenship to a robot – Sophia – in 2017.

Source: SAMAA
 
Holocaust researchers use AI to search for unnamed victims

Researchers in Israel are turning to artificial intelligence (AI) to comb through piles of records to try to identify hundreds of thousands of Jewish people killed in the Holocaust whose names are missing from official memorials.

More than six million Jews were murdered by the Nazis during the Second World War, a genocide commemorated across the world on Monday on Yom HaShoah, or Holocaust Remembrance Day.

In the build-up to those commemorations, staff at the Yad Vashem World Holocaust Remembrance Centre in Jerusalem said they were working to step up searches for details of known and unknown victims after developing their own AI-powered software.

Over the years, volunteers have tracked down information on 4.9 million individuals by reading through statements and documents, checking film footage, cemeteries and other records.

"It's very hard for a human being to do it - just to go over everything and not miss any details," Esther Fuxbrumer, head of software development at the centre, told Reuters.

There are huge gaps in their existing 9 million records. The Nazis "just took people, shot them, and covered them in a pit. And there was no one left to tell about them," Fuxbrumer said.

And then there is the mammoth task of linking individuals to dates and family members and other details, watching for duplicates and comparing accounts.
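
The duplicate-checking step is essentially record linkage. Here is a minimal sketch of the idea, flagging likely duplicates by fuzzy name similarity; the names and threshold are illustrative, and Yad Vashem's actual software is not public:

```python
# Flag likely duplicate victim records by fuzzy name similarity.
# Illustrative only: real record linkage would also compare dates,
# places and family links, and handle transliteration variants.
from difflib import SequenceMatcher

records = ["Chaim Goldberg", "Haim Goldberg", "Sara Levi", "Sarah Levy"]

def similar(a: str, b: str, threshold: float = 0.8) -> bool:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

for i, a in enumerate(records):
    for b in records[i + 1:]:
        if similar(a, b):
            print(f"possible duplicate: {a!r} ~ {b!r}")
```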


 
ChatGPT maker OpenAI said on Monday it would release a new AI model called GPT-4o, capable of realistic voice conversation and able to interact across text and image, its latest move to stay ahead in a race to dominate the emerging technology.

New audio capabilities enable users to speak to ChatGPT and obtain real-time responses with no delay, as well as interrupt ChatGPT while it is speaking, both hallmarks of realistic conversations that AI voice assistants have found challenging, the OpenAI researchers showed at a livestream event.
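
The "interrupt while speaking" behaviour comes down to streaming: output is consumed token by token, and generation is abandoned the moment the user cuts in. A rough sketch of that control flow, with a stubbed token stream standing in for the model (this illustrates the pattern only, not OpenAI's API):

```python
# Sketch of interruptible streaming output. The token generator is a
# stub; in a real system it would be a streaming model response.
import time

def token_stream():
    for token in "Sure, here is a long spoken answer about that".split():
        time.sleep(0.1)              # simulate generation latency
        yield token

def speak(stream, interrupted=lambda: False):
    for token in stream:
        if interrupted():            # user started talking: stop at once
            print("\n[interrupted - listening]")
            return
        print(token, end=" ", flush=True)
    print()

count = 0
def user_cut_in():                   # pretend the user interrupts after 3 tokens
    global count
    count += 1
    return count > 3

speak(token_stream(), interrupted=user_cut_in)
```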

Reuters
 

Macron: French AI boom could help EU close tech innovation gap with U.S. and China


Europe is pursuing AI innovation, tech regulation and competition with China in very different ways than the United States, French President Emmanuel Macron said this week, as the continent seeks to become the third major global tech force in what is now a U.S.-China dominated landscape.

“It’s insane to have a world where the big giants just come from China and the U.S,” Macron told CNBC’s Andrew Ross Sorkin in an exclusive interview Tuesday in Paris.

“We need much more European big players, and I think Mistral AI can be one of them,” Macron said of France’s leading AI company. Microsoft recently invested 15 million euros ($16.3 million) in Mistral.

Macron also praised H, the newly launched French AI startup that announced this week it had raised a massive $220 million from its initial round of financing.

“I think it’s even good for the U.S. ecosystem to have a very vivid, vibrant and ambitious European ecosystem,” he said.

Macron spoke to CNBC as technology leaders descended on Paris for the VivaTech innovation trade show. On Tuesday, the Elysee Palace hosted a group of business leaders and engineers in AI, on the eve of the show.

The trade show and the meetings came on the heels of a wave of new private investments in the country, led by a commitment from Microsoft of 4 billion euros, its largest ever in France.

“The more that AI companies decide to locate in Europe,” he said, “the more the European governments will be in the same situation as the U.S. and the Chinese governments.”

“Our challenge for AI is accelerate, innovate and invest, and on the other side, regulating at the appropriate scale,” he added.

The European Union is ahead of the United States in regulating artificial intelligence, having passed the first major set of regulatory rules in March with the EU Artificial Intelligence Act.

Macron also defended the strict European Union online privacy regulations, and he rejected the widely held view in Washington that Brussels is deliberately trying to undermine the dominant positions of U.S. tech giants like Google and Meta through a kind of competition-by-regulation strategy.

“I think it’s wrong,” said Macron. “If I want to provide a guarantee on your privacy, storage of your data, the view of your cloud, this is a sovereign and very important democratic issue.”

He compared allowing American tech giants to operate under U.S. regulations while in Europe, to allowing a French bank in the United States to ignore American banking regulations.

“The point is, I don’t regulate you when you’re operating in the U.S. But just be sure that when you operate in the European continent, you have to respect European rules.”

When it comes to China, however, Macron implied that he thought some U.S. tech regulations had gone too far.

He said France, for example, does not see a significant national security threat arising from TikTok, the massive social media app owned by China-based ByteDance.

Under a U.S. law recently passed in the name of national security, ByteDance will be required to divest TikTok in order to continue operating on American devices.

“We didn’t use this approach, and we are neutral in terms of technology, nationality, and players,” said Macron.

“Look, I think China is a competitor when you speak about trade, innovation and economy. I think the pity is that we should work much more collectively in order to push them to be compliant with international rules, instead of deciding ourselves not to respect international rules ourselves,” he said.

“They compete and they are quite good in terms of creating innovations and producing,” he said. “We were too naive till now, and today Europe is less productive to its economies [than] the U.S.”

 

AI products like ChatGPT much hyped but not much used, study says

Very few people are regularly using "much hyped" artificial intelligence (AI) products like ChatGPT, a survey suggests.

Researchers surveyed 12,000 people in six countries, including the UK, with only 2% of British respondents saying they use such tools on a daily basis.

But the study, from the Reuters Institute and Oxford University, says young people are bucking the trend, with 18 to 24-year-olds the most eager adopters of the tech.

Dr Richard Fletcher, the report's lead author, told the BBC there was a "mismatch" between the "hype" around AI and the "public interest" in it.

The study examined views on generative AI tools - the new generation of products that can respond to simple text prompts with human-sounding answers as well as images, audio and video.

Generative AI burst into the public consciousness when ChatGPT was launched in November 2022.

The attention OpenAI's chatbot attracted set off an almighty arms race among tech firms, who ever since have been pouring billions of dollars into developing their own generative AI features.

However, this research indicates that, for all the money and attention lavished on generative AI, it has yet to become part of people’s routine internet use.

"Large parts of the public are not particularly interested in generative AI, and 30% of people in the UK say they have not heard of any of the most prominent products, including ChatGPT," Dr Fletcher said.

 
Why would everyone use it? Most people don't even use Google or Microsoft Office products properly.

Real change will now come with voice assistants.
AI is worth the hype, even more so than cloud computing.

The BBC has gone down the drain, using 12k people to make a claim.
 
The ChatGPT free version is not worth using anymore: the information is old and wrong most of the time. People can use it, but you have to be very careful quoting it because you might end up being wrong.
 

China opens 1st AI hospital town to treat patients in virtual world


China has opened its first artificial intelligence (AI) hospital town, where patients are treated in a virtual world by AI-generated doctors, state media reported on Wednesday.

“The concept of an AI hospital town, where virtual patients are treated by AI doctors, holds immense significance for both medical professionals and the general public. The AI hospital aims to train doctor agents through a simulated environment so that it can autonomously evolve and improve its ability to treat disease,” the Beijing-based Global Times reported, citing recent interviews with Chinese researchers.

The researchers shed light on the practical implications of this novel approach to health care.

Tsinghua University researchers recently created the "Agent Hospital" in this virtual world, where all doctors, nurses, and patients are controlled by intelligent agents powered by large language models (LLMs) that can interact autonomously.
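
In outline, an "agent hospital" of this kind is a loop in which LLM-backed agents exchange messages in assigned roles. Here is a toy sketch of that structure, with a canned reply function standing in for the model; the roles and dialogue are invented, as Tsinghua's implementation is not public:

```python
# Toy doctor/patient agent loop. `llm` is a stub; a real system would
# call a large language model with the agent's role and the history.

def llm(role: str, history: list[str]) -> str:
    canned = {
        "patient": "I have had a sore throat and fever for two days.",
        "doctor": "That sounds viral; rest, fluids, and retest in 48 hours.",
    }
    return canned[role]              # stand-in for a real model call

history: list[str] = []
for role in ["patient", "doctor"]:
    reply = llm(role, history)
    history.append(f"{role}: {reply}")

print("\n".join(history))
```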

AI doctors can treat 10,000 patients in a few days, a task that would take human doctors at least two years.

Liu Yang, research team leader of the Agent Hospital, said this innovative method enables real doctors to treat virtual patients while also providing medical students with better training.

By simulating various AI patients, students can confidently propose treatment plans without risking harm to real patients, Liu stressed.

"AI hospital town can simulate and predict various medical scenarios, such as the spread, development and control of infectious diseases in a region," he added.

Liu announced that after six months of development, the AI hospital town is nearing readiness for practical use, aiming to be operational by the second half of 2024.

 
The ChatGPT free version is not worth using anymore: the information is old and wrong most of the time...
The new version of ChatGPT will be free as well.
Also, one should never quote AI.
If you need references to articles, I would suggest Copilot.
 
IMO it will not add much value, because AI can cause errors
It’s transformative; AI will add huge value to workers who have knowledge. As I have said before on this thread, I prompted ChatGPT to write code to healthcare standards, saw an issue, prompted it to correct its mistake and rewrite the code, and it apologised and gave me a correct answer.

That was remarkable for me, but of course one needs to have knowledge of what they are requesting.
 
AI systems could be on the verge of collapsing into nonsense, scientists warn

AI systems could collapse into nonsense as more of the internet gets filled with content made by artificial intelligence, researchers have warned.

Recent years have seen increased excitement about text-generating systems such as OpenAI’s ChatGPT. That excitement has led many to publish blog posts and other content created by those systems, and ever more of the internet has been produced by AI.

Many of the companies producing those systems use text taken from the internet to train them, however. That may lead to a loop in which the same AI systems being used to produce that text are then being trained on it.

That could quickly lead those AI tools to fall into gibberish and nonsense, researchers have warned in a new paper. Their warnings come amid a more general worry about the “dead internet theory”, which suggests that more and more of the web is becoming automated in what could be a vicious cycle.

It takes only a few cycles of both generating and then being trained on that content for those systems to produce nonsense, according to the research.

They found that one system tested with text about medieval architecture only needed nine generations before the output was just a repetitive list of jackrabbits, for instance.

The concept of AI being trained on datasets that were themselves created by AI, polluting its output in the process, has been referred to as “model collapse”. Researchers warn that it could become increasingly prevalent as AI systems are used more across the internet.

It happens because, as those systems produce data and are then trained on it, the less common parts of the data tend to be left out. Researcher Emily Wenger, who did not work on the study, used the example of a system trained on pictures of different dog breeds: if there are more golden retrievers in the original data, then it will pick those out, and as the process repeats the other breeds will eventually be left out entirely – before the system falls apart and just generates nonsense.
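
The tail-loss mechanism is easy to reproduce in miniature: repeatedly fit a distribution to samples drawn from the previous fit, and the spread tends to shrink until little variety is left. A toy sketch of the statistical effect (not the paper's actual experiments):

```python
# Toy model collapse: each generation fits a Gaussian to samples drawn
# from the previous generation's fit. The maximum-likelihood estimate
# of the spread is biased low, so on average the distribution narrows
# generation by generation and tail values become ever rarer.
import random
import statistics

random.seed(0)
mu, sigma = 0.0, 1.0                      # generation 0: "real" data
for generation in range(1, 11):
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    mu = statistics.fmean(samples)        # refit on purely synthetic data
    sigma = statistics.pstdev(samples)    # MLE spread, biased low
    print(f"gen {generation:2d}: sigma = {sigma:.3f}")
```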

The same effect happens with large language models like those that power ChatGPT and Google’s Gemini, the researchers found.

That could be a problem not only because the systems eventually become useless, but also because they will gradually become less diverse in their outputs. As the data is produced and recycled, the systems may fail to reflect all of the variety of the world, and smaller groups or outlooks might be erased entirely.

The problem “must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web”, the researchers write in their paper. It might also mean that those companies that have already scraped data to train their systems could be in a beneficial position, since data taken earlier will have more genuine human output in it.

The problem could be addressed with a range of possible solutions, including watermarking output so that it can be spotted by automated systems and then filtered out of those training sets. But watermarks are easy to remove, and AI companies have been resistant to working together to use them, among other issues.
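
In pipeline terms, the proposed fix is a filter in front of training: run a watermark detector over each scraped document and drop anything flagged as model-generated. A sketch of where such a filter would sit; the detector here is a stub, since robust watermark detection is exactly the open problem the article describes:

```python
# Sketch: filtering suspected AI-generated text out of a training corpus.
# `is_watermarked` is a stand-in; real detectors look for statistical
# signatures deliberately embedded in a model's token choices.

def is_watermarked(text: str) -> bool:
    return text.startswith("[AI]")       # stub for a real detector

corpus = [
    "[AI] Jackrabbits are a kind of hare found in...",
    "Hand-written field notes on medieval architecture.",
]
clean = [doc for doc in corpus if not is_watermarked(doc)]
print(f"{len(clean)} of {len(corpus)} documents kept for training")
```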

 

Garbage in.... garbage out..
 
Video game performers go on strike over AI

Major video game makers - like Activision, Warner Bros and Walt Disney - are facing a strike by Hollywood performers over the use of artificial intelligence (AI).

It follows a year and a half of talks over a new contract between the companies and a union representing more than 2,500 video game performers.

The two sides say they have agreed on several key issues, such as wages and job safety, but protections related to the use of AI technology remain a major hurdle.

The industrial action was called by the Screen Actors Guild-American Federation of Television and Radio Artists (Sag-Aftra), which last year paralysed Hollywood with a strike by film and television actors.

The performers are worried about gaming studios using generative AI to reproduce their voices and physical appearance to animate video game characters without providing them with fair compensation.

"Although agreements have been reached on many issues... the employers refuse to plainly affirm, in clear and enforceable language, that they will protect all performers covered by this contract in their AI language," Sag-Aftra said in a statement.

“We’re not going to consent to a contract that allows companies to abuse AI to the detriment of our members," it added.

However, the video game studios have said that they have already made enough concessions to the union's demands.

“We are disappointed the union has chosen to walk away when we are so close to a deal," said Audrey Cooling, a spokesperson for the 10 video game producers negotiating with Sag-Aftra.

"Our offer is directly responsive to Sag-Aftra’s concerns and extends meaningful AI protections that include requiring consent and fair compensation to all performers working under the [Interactive Media Agreement]," she added.

The Interactive Media Agreement covers artists who provide voiceover services and on-camera work used to create video game characters.

The last such deal, which did not provide AI protections, was due to expire in November 2022 but has been extended on a monthly basis while talks continued.

Last year, TV and film actors in the US won $1bn (£790m) in new pay and benefits, as well as safeguards on the use of AI, following a strike organised by Sag-Aftra.

The 118-day shutdown was the longest in the union's 90-year history.

Combined with a separate writers' strike, the actions severely disrupted film and TV production and cost California's economy more than $6.5bn, according to entertainment industry publication Deadline.

BBC
 
This is beginning to sound a lot like the Luddite rebellion against mechanical spinning, weaving and shearing mills during the Industrial Revolution. Fighting against the inevitable. I expect that in 20 years, the profession of animation, including creators, performers etc., will be an extremely niche one, like the handicrafts guys of today.
 
Don’t you ever worry, the Matrix will find a new labour job for you. They will keep figuring out the right jobs they can exploit your labour for.
 

Argentina will use AI to ‘predict future crimes’ but experts worry for citizens’ rights


Argentina’s security forces have announced plans to use artificial intelligence to “predict future crimes” in a move experts have warned could threaten citizens’ rights.

The country’s far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use “machine-learning algorithms to analyse historical crime data to predict future crimes”. It is also expected to deploy facial recognition software to identify “wanted persons”, patrol social media, and analyse real-time security camera footage to detect suspicious activities.
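
In practice, "machine-learning algorithms to analyse historical crime data to predict future crimes" usually means fitting a model to past incident counts per area and time window. A deliberately simple sketch of that idea; the figures are invented and nothing is public about the unit's actual methods:

```python
# Toy "predictive policing" forecast: next week's incident count per
# district from a moving average of past weeks. Real systems are more
# elaborate but share this sketch's core weakness: they reproduce
# whatever biases are baked into the historical data.
historical = {
    "district A": [12, 15, 11, 14],      # weekly incident counts (invented)
    "district B": [3, 2, 4, 3],
}

for district, weeks in historical.items():
    forecast = sum(weeks[-3:]) / 3       # 3-week moving average
    print(f"{district}: forecast ~ {forecast:.1f} incidents next week")
```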

While the ministry of security has said the new unit will help to “detect potential threats, identify movements of criminal groups or anticipate disturbances”, the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations.

Experts fear that certain groups of society could be overly scrutinised by the technology, and have also raised concerns over who – and how many security forces – will be able to access the information.

Amnesty International warned that the move could infringe on human rights. “Large-scale surveillance affects freedom of expression because it encourages people to self-censor or refrain from sharing their ideas or criticisms if they suspect that everything they comment on, post, or publish is being monitored by security forces,” said Mariela Belski, the executive director of Amnesty International Argentina.

Meanwhile, the Argentine Center for Studies on Freedom of Expression and Access to Information said such technologies have historically been used to “profile academics, journalists, politicians and activists”, which, without supervision, threatens privacy.

Milei, a far-right libertarian, rose to power late last year and has promised a hardline response to tackling crime. His security minister Patricia Bullrich reportedly seeks to replicate El Salvador’s controversial prison model, while the administration is moving towards militarising security policy, according to the Center for Legal and Social Studies. The government has also cracked down on protests, with riot police recently shooting tear gas and rubber bullets at demonstrators at close range, and officials threatening to sanction parents who bring children to marches.

The latest measure has prompted an especially strong reaction in a country with a dark history of state repression; an estimated 30,000 people were forcibly disappeared during its brutal 1976-83 dictatorship, some thrown alive from planes on so-called “death flights”. Thousands were also tortured, and hundreds of children kidnapped.

A ministry of security source said that the new unit will work under the current legislative framework, including the Personal Information Protection Act. It added that it will concentrate on applying AI, data analytics and machine learning to identify criminal patterns and trends in the ministry of security's databases.

 
This is beginning to sound a lot like the Luddite rebellion against mechanical spinning, weaving and shearing mills during the Industrial Revolution...
Yeah, that’s unfortunate, because AI is transformative and they have used all the existing data to get to that result.

The more I see where AI is heading, with new models being released every quarter, the more I realise that people with original ideas will get sidelined even more.

Irrespective, the future is in learning to utilise LLMs.
 
OpenAI blocks Iranian group's ChatGPT accounts for targeting US election

OpenAI said on Friday it had taken down accounts of an Iranian group for using its ChatGPT chatbot to generate content meant for influencing the U.S. presidential election and other issues.

The operation, identified as Storm-2035, used ChatGPT to generate content focused on topics such as commentary on the candidates on both sides in the U.S. elections, the conflict in Gaza and Israel's presence at the Olympic Games and then shared it via social media accounts and websites.

An investigation by the Microsoft-backed (MSFT.O) AI company showed that ChatGPT was used to generate long-form articles and shorter social media comments.

OpenAI said the operation did not appear to have achieved meaningful audience engagement.

The majority of the identified social media posts received few or no likes, shares or comments, and the company did not see indications of the web articles being shared across social media.

The accounts have been banned from using OpenAI's services and the company continues to monitor activities for any further attempts to violate policies, it said.

Earlier in August, a Microsoft threat-intelligence report said Iranian network Storm-2035, comprising four websites masquerading as news outlets, is actively engaging U.S. voter groups on opposing ends of the political spectrum.

The engagement was being built with "polarizing messaging on issues such as the U.S. presidential candidates, LGBTQ rights, and the Israel-Hamas conflict," the report stated.


 
ChatGPT firm OpenAI strikes deal with Vogue owner

OpenAI and global magazine giant Condé Nast have announced a partnership to allow ChatGPT and its search engine SearchGPT to display content from Vogue, The New Yorker, GQ and other well known publications.

The multi-year deal is the latest such agreement struck by OpenAI with major media firms.

The content produced by media organisations is sought after by technology companies that use it to train their AI (Artificial Intelligence) models.

Some media firms including the New York Times and the Chicago Tribune have resisted this and taken legal action to protect their content.

OpenAI and Condé Nast did not disclose the financial terms of the agreement.

“We’re committed to working with Condé Nast and other news publishers to ensure that as AI plays a larger role in news discovery and delivery, it maintains accuracy, integrity, and respect for quality reporting,” said Brad Lightcap, OpenAI's chief operating officer.

News media organisations have seen their business models challenged by the rise of social media and other digital platforms.

"Our partnership with OpenAI begins to make up for some of that revenue, allowing us to continue to protect and invest in our journalism and creative endeavours," said Condé Nast's chief executive officer Roger Lynch.

OpenAI launched its prototype AI-powered search engine, SearchGPT, last month.

In a statement at the time, the company said it was gathering feedback and insights from its partners in the news industry to develop the new platform.

Others that have partnered with the AI firm include Time Magazine, the Financial Times and the Associated Press.

AI chatbot technology is seen by many analysts as a key part of internet search engines in the future.

Search engine giant Google has also been racing to add AI-powered tools to its products.

While other AI companies are pursuing search products, Google remains by far the dominant player, claiming more than 90% of the global market.

The changes to how search engines respond to queries - offering conversational paragraphs instead of directing users to links - have also raised alarm among news media firms, many of which rely on search traffic for audiences and revenue.

Last year, the BBC said in a blog post that it was taking steps to prevent content on its websites from being used by OpenAI and other firms without permission.

The post also said the BBC would explore opportunities offered by generative AI "to deliver more value to our audiences and to society."

BBC
 
AI chip giant Nvidia shares fall despite record sales

Artificial intelligence (AI) chip giant Nvidia says its revenues for the three months to the end of July more than doubled compared to a year earlier, hitting a record $30bn (£24.7bn).

However, the firm's shares fell by more than 6% in New York after the announcement.

Nvidia has been one of the biggest beneficiaries of the AI boom, with its stock market value soaring to more than $3tn.

The company's shares have risen by more than 160% this year alone.

"It’s less about just beating estimates now, markets expect them to be shattered and it’s the scale of the beat today that looks to have disappointed a touch," said Matt Britzman, senior equity analyst at Hargreaves Lansdown.

The sky-high expectations are driven by its share price, which has surged ninefold in under two years thanks to its dominance of the AI chip market.

Profits for the period soared, with operating income rising 174% from the same time last year to $18.6bn.

It was the seventh quarter in a row that Nvidia had beaten analysts' expectations on both sales and profits.

"Generative AI will revolutionise every industry," said Nvidia chief executive Jensen Huang.

The results have become a quarterly event which sends Wall Street into a frenzy of buying and selling shares.

A "watch party" had been planned in Manhattan, according to the Wall Street Journal, while Mr Huang, famed for his signature leather jacket, has been dubbed the "Taylor Swift of tech".

Alvin Nguyen, senior analyst at Forrester, told the BBC both Nvidia and Mr Huang have become the "face of AI".

This has helped the company so far, but it could also hurt its valuation if AI fails to deliver after firms have invested billions of dollars in the technology, Mr Nguyen said.

"A thousand use cases for AI is not enough. You need a million."

Mr Nguyen also said Nvidia's first-mover advantage means it has market-leading products, which its customers have spent decades using, and a "software ecosystem".

He said that rivals, such as Intel, could "chip away" at Nvidia's market share if they developed a better product, though he said this would take time.

BBC
 
Markets slide as Nvidia shares plunge almost 10%

Share prices in Asian and US markets have tumbled as concerns grow that the world's largest economy could be headed towards a recession.

Data showed US manufacturing activity remains subdued, with investors now focused on key jobs figures due on Friday.

American chip giant Nvidia was hit particularly hard, slumping by almost 10% as optimism about the boom in artificial intelligence (AI) waned.

The news came after the US government sent subpoenas to Nvidia and other companies as part of its AI probe.

"Growth concerns are dominating market moves," Julia Lee at FTSE Russell told the BBC.

In New York on Tuesday, the S&P 500 index closed more than 2% lower, while the technology-heavy Nasdaq fell by over 3%.

Nasdaq-listed Nvidia fell by 9.5%, wiping $279bn (£212.9bn) off its stock market valuation.

Other US tech giants — including Alphabet, Apple and Microsoft — also saw their shares tumble.

On Wednesday morning, Japan's Nikkei 225 was down 4.4%, South Korea's Kospi was trading 3% lower and the Hang Seng in Hong Kong dropped by 1.3%.

Major Asian technology firms including TSMC, Samsung Electronics, SK Hynix and Tokyo Electron were sharply lower.

"Concerns around global growth look to be hitting exporting countries in the region particularly hard," Ms Lee added.

The highly-anticipated US non-farm payrolls jobs market report is due to be released on Friday.

Investors will be watching those figures closely for clues on how much the US Federal Reserve will cut interest rates by when officials meet next week.

Swetha Ramachandran, fund manager for Artemis Investment Management in London, said Nvidia’s share price fall may also be a reaction to news that the US Department of Justice was requiring the firm to give evidence over anti-trust issues.

She also said it could be a matter of “expectations catching up with reality” for the AI giant.

“[Nvidia] did report results last week where it alluded to a natural and expected deceleration in growth: from having delivered 122% growth in the second quarter it expects to deliver 80% growth in the third quarter.”

She added that the US indexes were likely down because investors were feeling less confident that the Federal Reserve would cut interest rates.

BBC
 
Every country should be extremely cautious after what happened today in Lebanon; AI is so far ahead that sometimes it takes days to even understand what happened.

All these chips from Nvidia and AMD definitely lend themselves to intelligence; Apple has even named its version Apple Intelligence.
 

OpenAI sees $11.6 billion revenue next year, offers Thrive chance to invest again in 2025

Sept 27 (Reuters) - Thrive Capital is investing more than $1 billion in OpenAI's current $6.5 billion fundraising round, and it has a sweetener no other investors are getting: the potential to invest another $1 billion next year at the same valuation if the AI firm hits a revenue goal, people familiar with the matter said on Friday.

OpenAI is predicting its revenue will skyrocket to $11.6 billion next year from an estimated $3.7 billion in 2024, the sources said, speaking on condition of anonymity. Losses are expected to be as much as $5 billion this year, depending largely on spending for computing power, which could change, one of the sources added.

The current funding round, which comes in the form of convertible debt, is expected to close by the end of next week and could value OpenAI at $150 billion, cementing its status as one of the most valuable private companies in the world.

That valuation depends on pulling off a complicated restructuring to remove the control of its non-profit board and also remove the cap on investment returns to investors, a plan first reported by Reuters. There is no specific timeline for when the conversion could be completed.

Thrive Capital, which also led OpenAI's previous funding round, is offering $1.2 billion from a combination of its own fund and a special purpose vehicle for smaller investors. Other investors in the new round include Microsoft (MSFT.O), Apple (AAPL.O), Nvidia (NVDA.O) and Khosla Ventures.

The others were not given the option of future investment at the current price, sources said. OpenAI's valuation has soared quickly, and if it continues to do so, Thrive could find itself increasing its stake next year at a discounted price.

Reuters was not able to determine the revenue target associated with the option for Thrive, which was founded by Joshua Kushner.

Thrive and OpenAI declined to comment.

OpenAI's revenue expectations far exceed CEO Sam Altman's earlier projection of $1 billion in revenue this year. The main revenue sources are sales of its services to corporations and subscriptions to its chatbot.

Its flagship product, ChatGPT, is expected to bring in $2.7 billion in revenue this year, jumping from $700 million in 2023. The chatbot service, which charges a $20 fee every month, has about 10 million paying users.
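
For rough scale: 10 million subscribers at $20 a month works out to about $2.4 billion a year (10,000,000 × $20 × 12), so the $2.7 billion ChatGPT figure is plausible once enterprise tiers and growth through the year are added.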

Source: Reuters
 

California governor blocks landmark AI safety bill


The governor of California, Gavin Newsom, has blocked a landmark artificial intelligence (AI) safety bill, which had faced strong opposition from major technology companies.

The proposed legislation would have imposed some of the first regulations on AI in the US.

Mr Newsom said the bill could stifle innovation and prompt AI developers to move out of the state.

Senator Scott Wiener, who authored the bill, said the veto allows companies to continue developing an "extremely powerful technology" without any government oversight.

The bill would have required the most advanced AI models to undergo safety testing.

It would have forced developers to ensure their technology included a so-called "kill switch". A kill switch would allow organisations to isolate and effectively switch off an AI system if it became a threat.
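
In software terms, a kill switch is usually an externally controllable flag that a serving loop checks before doing any work. A minimal sketch of the pattern; the bill mandated the capability, not any particular implementation:

```python
# Minimal kill-switch pattern: a shared flag an operator can set to
# halt an AI serving loop. Illustrative only.
import threading

kill_switch = threading.Event()

def serve(requests):
    for request in requests:
        if kill_switch.is_set():         # operator pulled the switch
            print("kill switch set: refusing further requests")
            return
        print(f"served: {request}")

serve(["query 1", "query 2"])            # normal operation
kill_switch.set()                        # simulated emergency shutdown
serve(["query 3"])                       # now refused
```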

It would also have made official oversight compulsory for the development of so-called "Frontier Models" - or the most powerful AI systems.

The bill "does not take into account whether an Al system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data," Mr Newsom said in a statement.

"Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it," he added.

At the same time, Mr Newsom announced plans to protect the public from the risks of AI and asked leading experts to help develop safeguards for the technology.

Over the last few weeks, Mr Newsom has also signed 17 bills, including legislation aimed at cracking down on misinformation and so-called deep fakes, which include images, video, or audio content created using generative AI.

California is home to many of the world's largest and most advanced AI companies, including the ChatGPT maker, OpenAI.

The state's role as a hub for many of the world's largest tech firms means that any bill regulating the sector would have a major national and global impact on the industry.

Mr Wiener said the decision to veto the bill leaves AI companies with "no binding restrictions from US policy makers, particularly given Congress’s continuing paralysis around regulating the tech industry in any meaningful way."

Efforts by Congress to impose safeguards on AI have stalled.

OpenAI, Google and Meta were among several major tech firms that voiced opposition to the bill and warned it would hinder the development of a crucial technology.

 

Argentina will use AI to ‘predict future crimes’ but experts worry for citizens’ rights...


This is quite interesting, and hopefully it will be used in Bharat, where we can deploy this software in Kashmir, UP etc. to detect behavioural patterns and stop a child from becoming a terrorist in future by detecting the level of brainwashing they have gone through in the local madrassa or through parents.
 
I think Indians have finally become involved in startups, being founders and taking risks. Many AI startups will of course be copy-paste on existing models, but the ones solving real IT problems will do well.

A good time for the next generation of Indians to be inspired, finally breaking generational cultural norms.
 
Google turns to nuclear to power AI data centres

Google has signed a deal to use small nuclear reactors to generate the vast amounts of energy needed to power its artificial intelligence (AI) data centres.

The company says the agreement with Kairos Power will see it start using the first reactor this decade and bring more online by 2035.

The companies did not give any details about how much the deal is worth or where the plants will be built.

Technology firms are increasingly turning to nuclear sources of energy to supply the electricity used by the huge data centres that drive AI.


 
Canadian media companies sue OpenAI in case potentially worth billions

Canada’s major news organizations have sued tech firm OpenAI for potentially billions of dollars, alleging the company is “strip-mining journalism” and unjustly enriching itself by using news articles to train its popular ChatGPT software.

The suit, filed on Friday in Ontario’s superior court of justice, calls for punitive damages, a share of profits made by OpenAI from using the news organizations’ articles, and an injunction barring the San Francisco-based company from using any of the news articles in the future.

“These artificial intelligence companies cannibalize proprietary content and are free-riding on the backs of news publishers who invest real money to employ real journalists who produce real stories for real people,” said Paul Deegan, president of News Media Canada.

“They are strip-mining journalism while substantially, unjustly and unlawfully enriching themselves to the detriment of publishers.”

The litigants include the Globe and Mail, the Canadian Press, the CBC, the Toronto Star, Metroland Media and Postmedia. They want up to C$20,000 in damages for each article used by OpenAI, suggesting a victory in court could be worth billions.
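
The arithmetic behind "worth billions": at C$20,000 per article, 100,000 articles would already come to C$2bn, and the archives of outlets this size could easily run to hundreds of thousands of articles over the period OpenAI is alleged to have scraped.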

“The defendants have engaged in ongoing, deliberate and unauthorized misappropriation of the plaintiffs’ valuable news media works. The plaintiffs bring this action to prevent and seek recompense for these unlawful activities,” said the statement of claim filed by the news organizations.

“To obtain the significant quantities of text data needed to develop their GPT models, OpenAI deliberately ‘scrapes’ (ie, accesses and copies) content from the news media companies’ websites … It then uses that proprietary content to develop its GPT models, without consent or authorization.”

None of the claims have been tested in court.

The suit is the latest in a string of battles by Canadian media against American technology companies, including a bitter feud with Facebook parent Meta. Many news outlets in the US, including the New York Times, have also sued OpenAI.

Valued at more than $150bn, OpenAI has already signed licensing agreements with a handful of media organizations, including the Associated Press wire service, NewsCorp and Condé Nast.

The company did not immediately respond to a request for comment.

SOURCE: https://www.theguardian.com/world/2024/nov/29/canada-media-companies-sue-openai-chatgpt
 