Why what? The CTO couldn't promise they didn't use YouTube data… But why?
The new version of ChatGPT will be free as well. The free version of ChatGPT is not worth using anymore: the information is old and wrong most of the time. People can use it, but you have to be very careful quoting it because you might end up being wrong.
It’s transformative. AI will add huge value to workers who have knowledge. As I have said before on this thread, I prompted ChatGPT to write code to healthcare standards; I saw an issue, prompted it to correct its mistake and rewrite it, and it apologised and gave me a correct answer.
IMO it will not add much value, because AI can cause errors.
Garbage in... garbage out.
AI systems could be on the verge of collapsing into nonsense, scientists warn
AI systems could collapse into nonsense as more of the internet gets filled with content made by artificial intelligence, researchers have warned.
Recent years have seen increased excitement about text-generating systems such as OpenAI’s ChatGPT. That excitement has led many to publish blog posts and other content created by those systems, and ever more of the internet has been produced by AI.
Many of the companies producing those systems use text taken from the internet to train them, however. That may lead to a loop in which the same AI systems being used to produce that text are then being trained on it.
That could quickly lead those AI tools to fall into gibberish and nonsense, researchers have warned in a new paper. Their warnings come amid a more general worry about the “dead internet theory”, which suggests that more and more of the web is becoming automated in what could be a vicious cycle.
It takes only a few cycles of both generating and then being trained on that content for those systems to produce nonsense, according to the research.
They found that one system tested with text about medieval architecture only needed nine generations before the output was just a repetitive list of jackrabbits, for instance.
The concept of AI being trained on datasets that were themselves created by AI, which then pollutes its output, has been referred to as “model collapse”. Researchers warn that it could become increasingly prevalent as AI systems are used more across the internet.
It happens because, as those systems produce data and are then trained on it, the less common parts of the data tend to be left out. Researcher Emily Wenger, who did not work on the study, used the example of a system trained on pictures of different dog breeds: if there are more golden retrievers in the original data, then it will favour those, and as the process goes round the other breeds will eventually be left out entirely, before the system falls apart and just generates nonsense.
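As a rough illustration of that mechanism (a toy sketch, not code from the study), the simulation below repeatedly fits a categorical distribution to a small hypothetical "dog breed" dataset and then retrains on its own samples; the rarer breeds tend to drift out of the data within a few generations.

```python
# Toy sketch of the "model collapse" feedback loop described above (not from the paper):
# fit a model to data, sample new data from the model, refit on the synthetic data, repeat.
import random
from collections import Counter

random.seed(0)

# Hypothetical starting dataset: golden retrievers dominate, other breeds are rare.
data = ["golden retriever"] * 85 + ["dalmatian"] * 12 + ["basset hound"] * 3

for generation in range(1, 11):
    counts = Counter(data)
    total = sum(counts.values())
    breeds = list(counts)
    # "Train" the model: estimate each breed's probability from the current data.
    weights = [counts[b] / total for b in breeds]
    # "Generate" the next dataset by sampling from the model, then retrain on it.
    data = random.choices(breeds, weights=weights, k=total)
    print(f"generation {generation}: {dict(Counter(data))}")

# Once a rare breed fails to be sampled in some generation it can never reappear,
# so diversity only shrinks -- a simplified analogue of the tail loss described above.
```

In this toy version the collapse is only a loss of rare categories; the article's point is that the same shrinking-tail dynamic, compounded over generations of training on web-scale synthetic data, eventually degrades the models themselves.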
The same effect happens with large language models like those that power ChatGPT and Google’s Gemini, the researchers found.
That could be a problem not only because the systems eventually become useless, but also because they will gradually become less diverse in their outputs. As the data is produced and recycled, the systems may fail to reflect all of the variety of the world, and smaller groups or outlooks might be erased entirely.
The problem “must be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web”, the researchers write in their paper. It might also mean that those companies that have already scraped data to train their systems could be in a beneficial position, since data taken earlier will have more genuine human output in it.
The problem could be addressed with a range of possible solutions, including watermarking AI output so that it can be spotted by automated systems and filtered out of training sets. But such watermarks are easy to remove, and AI companies have been resistant to working together on using them, among other issues.
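As a minimal sketch of the filtering step that watermarking would enable (assuming some watermark or provenance detector exists), the snippet below drops flagged documents from a training corpus; `looks_watermarked` is a hypothetical placeholder, not a real detector.

```python
# Hypothetical sketch: removing suspected AI-generated documents from a training corpus.
# `looks_watermarked` stands in for whatever statistical watermark or provenance check
# a provider might actually implement; it is not a real detection API.
def looks_watermarked(text: str) -> bool:
    # Placeholder heuristic for illustration only.
    return "as an ai language model" in text.lower()

def filter_training_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that the detector does not flag as machine-generated."""
    return [doc for doc in documents if not looks_watermarked(doc)]

corpus = [
    "A hand-written blog post about medieval church architecture.",
    "As an AI language model, I cannot browse the internet.",
]
print(filter_training_corpus(corpus))  # only the first document survives
```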
AI systems could be on the verge of collapsing, scientists warn
‘Model collapse’ could make systems such as ChatGPT less useful, researchers say
www.independent.co.uk
This is beginning to sound a lot like the Luddite rebellion against mechanical spinning, weaving and shearing mills during the Industrial Revolution. Fighting against the inevitable. I expect that in 20 years the profession of animation, including creators, performers and so on, will be an extremely niche one, like handicrafts guys today.
Video game performers go on strike over AI
Major video game makers - like Activision, Warner Bros and Walt Disney - are facing a strike by Hollywood performers over the use of artificial intelligence (AI).
It follows a year and a half of talks over a new contract between the companies and a union representing more than 2,500 video game performers.
The two sides say they have agreed on several key issues, such as wages and job safety, but protections related to the use of AI technology remain a major hurdle.
The industrial action was called by the Screen Actors Guild-American Federation of Television and Radio Artists (Sag-Aftra), which last year paralysed Hollywood with a strike by film and television actors.
The performers are worried about gaming studios using generative AI to reproduce their voices and physical appearance to animate video game characters without providing them with fair compensation.
"Although agreements have been reached on many issues... the employers refuse to plainly affirm, in clear and enforceable language, that they will protect all performers covered by this contract in their AI language," Sag-Aftra said in a statement.
“We’re not going to consent to a contract that allows companies to abuse AI to the detriment of our members," it added.
However, the video game studios have said that they have already made enough concessions to the union's demands.
“We are disappointed the union has chosen to walk away when we are so close to a deal," said Audrey Cooling, a spokesperson for the 10 video game producers negotiating with Sag-Aftra.
"Our offer is directly responsive to Sag-Aftra’s concerns and extends meaningful AI protections that include requiring consent and fair compensation to all performers working under the [Interactive Media Agreement]," she added.
The Interactive Media Agreement covers artists who provide voiceover services and on-camera work used to create video game characters.
The last such deal, which did not provide AI protections, was due to expire in November 2022 but has been extended on a monthly basis while talks continued.
Last year, TV and film actors in the US won $1bn (£790m) in new pay and benefits, as well as safeguards on the use of AI, following a strike organised by Sag-Aftra.
The 118-day shutdown was the longest in the union's 90-year history.
Combined with a separate writers' strike, the actions severely disrupted film and TV production and cost California's economy more than $6.5bn, according to entertainment industry publication Deadline.
BBC
Yeah, that's unfortunate, because AI is transformative and they have used all existing data to come to that result.
Argentina will use AI to ‘predict future crimes’ but experts worry for citizens’ rights
Argentina’s security forces have announced plans to use artificial intelligence to “predict future crimes” in a move experts have warned could threaten citizens’ rights.
The country’s far-right president Javier Milei this week created the Artificial Intelligence Applied to Security Unit, which the legislation says will use “machine-learning algorithms to analyse historical crime data to predict future crimes”. It is also expected to deploy facial recognition software to identify “wanted persons”, patrol social media, and analyse real-time security camera footage to detect suspicious activities.
While the ministry of security has said the new unit will help to “detect potential threats, identify movements of criminal groups or anticipate disturbances”, the Minority Report-esque resolution has sent alarm bells ringing among human rights organisations.
Experts fear that certain groups of society could be overly scrutinised by the technology, and have also raised concerns over who – and how many security forces – will be able to access the information.
Amnesty International warned that the move could infringe on human rights. “Large-scale surveillance affects freedom of expression because it encourages people to self-censor or refrain from sharing their ideas or criticisms if they suspect that everything they comment on, post, or publish is being monitored by security forces,” said Mariela Belski, the executive director of Amnesty International Argentina.
Meanwhile, the Argentine Center for Studies on Freedom of Expression and Access to Information said such technologies have historically been used to “profile academics, journalists, politicians and activists”, which, without supervision, threatens privacy.
Milei, a far-right libertarian, rose to power late last year and has promised a hardline response to tackling crime. His security minister Patricia Bullrich reportedly seeks to replicate El Salvador’s controversial prison model, while the administration is moving towards militarising security policy, according to the Center for Legal and Social Studies. The government has also cracked down on protests, with riot police recently shooting tear gas and rubber bullets at demonstrators at close range, and officials threatening to sanction parents who bring children to marches.
The latest measure has prompted an especially strong reaction in a country with a dark history of state repression; an estimated 30,000 people were forcibly disappeared during its brutal 1976-83 dictatorship, some thrown alive from planes on so-called “death flights”. Thousands were also tortured, and hundreds of children kidnapped.
A ministry of security source said that the new unit will work under the current legislative framework, including the mandate of the Personal Information Protection Act. It added that it will concentrate on applying AI, data analytics and machine learning to identify criminal patterns and trends in the ministry of security's databases.
Argentina will use AI to ‘predict future crimes’ but experts worry for citizens’ rights
President Javier Milei creates security unit as some say certain groups may be overly scrutinized by the technology
www.theguardian.com