ChatGPT and artificial intelligence

Do you think most programming jobs will disappear eventually?

There will still be a need for programmers, because new programming languages are emerging all the time. For example, R is fast becoming a very important language, especially in data analytics.

It will affect jobs for sure, but not yet, because it is impractical for people to sit around using AI for everything.
 
I am a programmer, and ChatGPT and other emerging AI will definitely affect my field massively in the near future.

Who will verify and challenge what ChatGPT produces? Will financial institutions, developers of airplane software, or doctors trust what ChatGPT produces?
 

That's a very good point. I see it as similar to self-service checkout machines: you still need humans supervising them, as the machines are not self-reliant or smart enough to do everything.
 

Important:
Have a secret code name between you and your loved ones so you can verify it's really you or them on the phone.
 
I have ChatGPT as my assistant when building websites. It saves so much time otherwise wasted crawling through Stack Overflow.
 
Artificial intelligence could lead to the extinction of humanity, experts - including the heads of OpenAI and Google Deepmind - have warned.

Dozens have supported a statement published on the webpage of the Centre for AI Safety.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" it reads.

But others say the fears are overblown.

Sam Altman, chief executive of ChatGPT-maker OpenAI, Demis Hassabis, chief executive of Google DeepMind and Dario Amodei of Anthropic have all supported the statement.

The Centre for AI Safety website suggests a number of possible disaster scenarios, including "enfeeblement", where humans become dependent on AI, "similar to the scenario portrayed in the film Wall-E".

Dr Geoffrey Hinton, who issued an earlier warning about risks from super-intelligent AI, has also supported the Centre for AI Safety's call.

Yoshua Bengio, professor of computer science at the University of Montreal, also signed.

Dr Hinton, Prof Bengio and NYU Professor Yann LeCun are often described as the "godfathers of AI" for their groundbreaking work in the field - for which they jointly won the 2018 Turing Award, which recognises outstanding contributions in computer science.

But Prof LeCun, who also works at Meta, has said these apocalyptic warnings are overblown, tweeting that "the most common reaction by AI researchers to these prophecies of doom is face palming".

'Fracturing reality'

Many other experts similarly believe that fears of AI wiping out humanity are unrealistic, and a distraction from issues such as bias in systems that are already a problem.

Arvind Narayanan, a computer scientist at Princeton University, has previously told the BBC that sci-fi-like disaster scenarios are unrealistic: "Current AI is nowhere near capable enough for these risks to materialise. As a result, it's distracted attention away from the near-term harms of AI".

Oxford's Institute for Ethics in AI senior research associate Elizabeth Renieris told BBC News she worried more about risks closer to the present.

"Advancements in AI will magnify the scale of automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair while also being inscrutable and incontestable," she said. They would "drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide".

Many AI tools essentially "free ride" on the "whole of human experience to date", Ms Renieris said. Many are trained on human-created content, text, art and music they can then imitate - and their creators "have effectively transferred tremendous wealth and power from the public sphere to a small handful of private entities".

But Centre for AI Safety director Dan Hendrycks told BBC News future risks and present concerns "shouldn't be viewed antagonistically".

"Addressing some of the issues today can be useful for addressing many of the later risks tomorrow," he said.

Superintelligence efforts
Media coverage of the supposed "existential" threat from AI has snowballed since March 2023 when experts, including Tesla boss Elon Musk, signed an open letter urging a halt to the development of the next generation of AI technology.

That letter asked if we should "develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us".

In contrast, the new campaign has a very short statement, designed to "open up discussion".

The statement compares the risk to that posed by nuclear war. In a blog post OpenAI recently suggested superintelligence might be regulated in a similar way to nuclear energy: "We are likely to eventually need something like an IAEA [International Atomic Energy Agency] for superintelligence efforts" the firm wrote.

'Be reassured'

Both Sam Altman and Google chief executive Sundar Pichai are among technology leaders to have discussed AI regulation recently with the prime minister.

Speaking to reporters about the latest warning over AI risk, Rishi Sunak stressed the benefits to the economy and society.

"You've seen that recently it was helping paralysed people to walk, discovering new antibiotics, but we need to make sure this is done in a way that is safe and secure," he said.

"Now that's why I met last week with CEOs of major AI companies to discuss what are the guardrails that we need to put in place, what's the type of regulation that should be put in place to keep us safe.

"People will be concerned by the reports that AI poses existential risks, like pandemics or nuclear wars.

"I want them to be reassured that the government is looking very carefully at this."

He had discussed the issue recently with other leaders, at the G7 summit of leading industrialised nations, Mr Sunak said, and would raise it again in the US soon.

The G7 has recently created a working group on AI.

BBC
 
Pakistani scientists develop AI method to determine citrus fruit sweetness
Pakistan, being the sixth-largest producer of citrus fruits globally, stands to gain from this advancement

A team of Pakistani scientists has made a significant scientific breakthrough by developing an artificial intelligence (AI)-based visual classification method that accurately assesses the sweetness of native citrus fruits.

Led by Dr Ayesha Zeb from the National Centre of Robotics and Automation at the National University of Sciences and Technology (NUST), the team successfully predicted fruit sweetness with over 80 per cent accuracy, without damaging the fruit in the process.

To conduct their experiment, the researchers selected 92 citrus fruits, including Blood Red, Mosambi, and Succari varieties, from a farm in the Chakwal district. They utilised a handheld spectrometer to obtain spectra - patterns of light reflected from marked regions on the fruits' skin. The team employed near-infrared (NIR) spectroscopy, a technique that enables the analysis of non-visible light spectra, to examine the fruit samples. Of the 92 fruits, 64 were used for calibration and 28 for prediction via the spectrometer.

While the use of NIR spectroscopy in damage-free fruit classification is not new, the Pakistani team's novel approach involved applying it to model the sweetness of local fruits. Additionally, they integrated artificial intelligence algorithms for direct classification of orange sweetness, resulting in improved accuracy.
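
For readers curious what such a calibration/prediction split looks like in practice, here is a rough sketch of that kind of workflow using scikit-learn. The synthetic spectra, the random-forest model and the two-class sweetness labels are placeholders for illustration, not the NUST team's actual preprocessing or algorithm.

```python
# Rough sketch of a calibration/prediction workflow like the one described:
# fit a classifier on NIR spectra from a calibration set, then evaluate it on
# a held-out prediction set. The data here is synthetic and the model is a
# plain random forest; both are stand-ins, not the study's actual method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_fruits, n_wavelengths = 92, 256                 # 92 fruits, one spectrum each
X = rng.normal(size=(n_fruits, n_wavelengths))    # placeholder NIR spectra
y = rng.integers(0, 2, size=n_fruits)             # 0 = less sweet, 1 = sweet

X_cal, y_cal = X[:64], y[:64]       # 64 fruits used for calibration
X_pred, y_pred = X[64:], y[64:]     # 28 fruits held out for prediction

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_cal, y_cal)
print("prediction-set accuracy:", accuracy_score(y_pred, model.predict(X_pred)))
```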

...
https://tribune.com.pk/story/241940...ai-method-to-determine-citrus-fruit-sweetness
 
Copenhagen: Danish Prime Minister Mette Frederiksen on Wednesday delivered a speech to parliament partly written using artificial intelligence tool ChatGPT to highlight the revolutionary aspects and risks of AI.

The head of the Danish government was giving a traditional speech as parliament gets ready to close for the summer.

"What I have just read here is not from me. Or any other human for that matter", Frederiksen suddenly said part-way into her speech to legislators, explaining it was written by ChatGPT.

"Even if it didn't always hit the nail on the head, both in terms of the details of the government's work programme and punctuation... it is both fascinating and terrifying what it is capable of", the leader added.

ChatGPT burst into the spotlight late last year, demonstrating an ability to generate essays, poems and conversations from the briefest of prompts.

The programme's wild success sparked a gold rush with billions of dollars of investment into the field, but critics and insiders have raised the alarm.

Common worries include the possibility that chatbots could flood the web with disinformation, that biased algorithms will churn out racist material, or that AI-powered automation could lay waste to entire industries.

The subject is on the agenda of a high-level meeting on trade between the United States and the European Union this Wednesday in Lulea, Sweden.

A group of industry chiefs and experts, including Sam Altman whose firm OpenAI created the ChatGPT bot, warned Tuesday about the potential threat of "extinction" posed by the technology.

The part of Frederiksen's speech drafted by ChatGPT included sentences like the following: "It has been an honour and a challenge to lead a broad government in the last parliamentary year."

"We have worked hard to co-operate across parties and ensure a strong and sustainable future for Denmark," and "We have taken steps to combat climate change and ensure a fairer and more inclusive society where all citizens have equal opportunities," ChatGPT also wrote.

"Although we have faced challenges and resistance along the way, I am proud of what we have achieved together in the last parliamentary year."

Frederiksen's regular speechwriters have yet to comment on the quality of the writing.

NDTV
 
I had already outlined many of these effects of AI earlier in this thread, before AI technologies became as mainstream as they are today, and I'm now glad to see other posters contributing to this thread as GPT and AI engines in general go mainstream.
 
A boy saw 17 doctors over 3 years for chronic pain. ChatGPT found the diagnosis
Alex experienced pain that stopped him from playing with other children but doctors had no answers to why. His frustrated mom asked ChatGPT for help.

During the COVID-19 lockdown, Courtney bought a bounce house for her two young children. Soon after, her son, Alex, then 4, began experiencing pain.

“(Our nanny) started telling me, ‘I have to give him Motrin every day, or he has these gigantic meltdowns,’” Courtney, who asked not to use her last name to protect her family’s privacy, tells TODAY.com. “If he had Motrin, he was totally fine."

Then Alex began chewing things, so Courtney took him to the dentist. What followed was a three-year search for the cause of Alex's increasing pain and eventually other symptoms.

The beginning of the end of the journey came earlier this year, when Courtney finally got some answers from an unlikely source, ChatGPT. The frustrated mom made an account and shared with the artificial intelligence platform everything she knew about her son's symptoms and all the information she could gather from his MRIs.

“We saw so many doctors. We ended up in the ER at one point. I kept pushing,” she says. “I really spent the night on the (computer) … going through all these things."

So, when ChatGPT suggested a diagnosis of tethered cord syndrome, "it made a lot of sense," she recalls.

When Alex began chewing on things, his parents wondered if his molars were coming in and causing pain. As it continued, they thought he had a cavity.

“Our sweet personality — for the most part — (child) is dissolving into this tantrum-ing crazy person that didn’t exist the rest of the time,” Courtney recalls.

The dentist “ruled everything out” but thought maybe Alex was grinding his teeth and believed an orthodontist specializing in airway obstruction could help. Airway obstructions impact a child’s sleep and could explain why he seemed so exhausted and moody, the dentist thought. The orthodontist found that Alex’s palate was too small for his mouth and teeth, which made it tougher for him to breathe at night. She placed an expander in Alex’s palate, and it seemed like things were improving.

“Everything was better for a little bit,” Courtney says. “We thought we were in the home stretch.”

But then she noticed Alex had stopped growing taller, so they visited the pediatrician, who thought the pandemic was negatively affecting his development. Courtney didn’t agree, but she still brought her son back in early 2021 for a checkup.

"He'd grown a little bit," she says.

The pediatrician then referred Alex to physical therapy because he seemed to have some imbalances between his left and right sides.

“He would lead with his right foot and just bring his left foot along for the ride,” Courtney says.

But before starting physical therapy, Alex had already been experiencing severe headaches that were only getting worse. He visited a neurologist, who said Alex had migraines. The boy also struggled with exhaustion, so he was taken to an ear, nose and throat doctor to see if he was having sleep problems due to his sinus cavities or airway.

No matter how many doctors the family saw, the specialists would only address their individual areas of expertise, Courtney says.

“Nobody’s willing to solve for the greater problem,” she adds. “Nobody will even give you a clue about what the diagnosis could be.”

Next, a physical therapist thought that Alex could have something called Chiari malformation, a congenital condition that causes abnormalities in the brain where the skull meets the spine, according to the American Association of Neurological Surgeons. Courtney began researching it, and they visited more doctors — a new pediatrician, a pediatric internist, an adult internist and a musculoskeletal doctor — but again reached a dead end.

In total, they visited 17 different doctors over three years. But Alex still had no diagnosis that explained all his symptoms. An exhausted and frustrated Courtney signed up for ChatGPT and began entering his medical information, hoping to find a diagnosis.

“I went line by line of everything that was in his (MRI notes) and plugged it into ChatGPT,” she says. “I put the note in there about ... how he wouldn’t sit crisscross applesauce. To me, that was a huge trigger (that) a structural thing could be wrong.”

She eventually found tethered cord syndrome and joined a Facebook group for families of children with it. Their stories sounded like Alex's. She scheduled an appointment with a new neurosurgeon and told her she suspected Alex had tethered cord syndrome. The doctor looked at his MRI images and knew exactly what was wrong with Alex.

“She said point blank, ‘Here’s occulta spina bifida, and here’s where the spine is tethered,’” Courtney says.

Tethered cord syndrome occurs when the tissue in the spinal cord forms attachments that limit movement of the spinal cord, causing it to stretch abnormally, according to the American Association of Neurological Surgeons. The condition is closely associated with spina bifida, a birth defect where part of the spinal cord doesn’t develop fully and some of the spinal cord and nerves are exposed.

With tethered cord syndrome, “the spinal cord is stuck to something. It could be a tumor in the spinal canal. It could be a bump on a spike of bones. It could just be too much fat at the end of the spinal cord,” Dr. Holly Gilmer, a pediatric neurosurgeon at the Michigan Head & Spine Institute, who treated Alex, tells TODAY.com. "The abnormality can’t elongate ... and it pulls.”

In many children with spina bifida, there’s a visible opening in the child’s back. But the type Alex had is closed and considered “hidden,” also known as spina bifida occulta, according to the U.S. Centers for Disease Control and Prevention.

“My son doesn’t have a hole. There’s almost what looks like a birthmark on the top of his buttocks, but nobody saw it,” Courtney says. “He has a crooked belly button.”

Gilmer says doctors often find these conditions soon after birth, but in some cases, the marks — such as a dimple, a red spot or a tuft of hair — that indicate spina bifida occulta can be missed. Then doctors rely on symptoms to make the diagnosis, which can include dragging a leg, pain, loss of bladder control, constipation, scoliosis, foot or leg abnormalities and a delay in hitting milestones, such as sitting up and walking.

“In young children, it can be difficult to diagnose because they can’t speak,” Gilmer says, adding that many parents and children don't realize that their symptoms indicate a problem. "If this is how they have always been, they think that’s normal.”

When Courtney finally had a diagnosis for Alex, she experienced "every emotion in the book, relief, validated, excitement for his future."

ChatGPT and medicine
ChatGPT is a type of artificial intelligence program that responds based on input that a person enters into it, but it can't have a conversation or provide answers in the way that many people might expect.

That's because ChatGPT works by "predicting the next word" in a sentence or series of words based on existing text data on the internet, Andrew Beam, Ph.D., assistant professor of epidemiology at Harvard who studies machine learning models and medicine, tells TODAY.com. “Anytime you ask a question of ChatGPT, it’s recalling from memory things it has read before and trying to predict the piece of text.”

When using ChatGPT to make a diagnosis, a person might tell the program, "I have fever, chills and body aches,” and it fills in “influenza” as a possible diagnosis, Beam explains.

“It’s going to do its best to give you a piece of text that looks like a … passage that it’s read,” he adds.
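
To make the "predicting the next word" idea concrete, here is a minimal sketch using the Hugging Face transformers library and the small open GPT-2 model. GPT-2 is only a stand-in to illustrate the principle; it is not how ChatGPT itself is built or served.

```python
# Minimal sketch of next-token prediction: the model assigns a score to every
# token in its vocabulary for the position after the prompt, and generation
# simply keeps picking likely next tokens. GPT-2 here is a small stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I have fever, chills and body aches. A likely diagnosis is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits       # scores for every vocabulary token

next_token_scores = logits[0, -1]         # scores for the next position only
top = torch.topk(next_token_scores, k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```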

There are both free and paid versions of ChatGPT, and the latter works much better than the free version, Beam says. But both seem to work better than the average symptom checker or Google as a diagnostic tool. “It’s a super high-powered medical search engine,” Beam says.

It can be especially beneficial for patients with complicated conditions who are struggling to get a diagnosis, Beam says.

These patients are "groping for information," he adds. "I do think ChatGPT can be a good partner in that diagnostic odyssey. It has read literally the entire internet. It may not have the same blind spots as the human physician has."

But it’s not likely to replace a clinician’s expertise anytime soon, he says. For example, ChatGPT fabricates information sometimes when it can't find the answer. Say you ask it for studies about influenza. The tool might respond with several titles that sound real, and the authors it lists may have even written about flu before — but the papers may not actually exist.

This phenomenon is called "hallucination," and "that gets really problematic when we start talking about medical applications because you don’t want it to just make things up," Beam says.

Dr. Jesse M. Ehrenfeld, president of leading U.S. physicians' group the American Medical Association, tells TODAY.com in a statement that the AMA "supports deployment of high-quality, clinically validated AI that is deployed in a responsible, ethical, and transparent manner with patient safety being the first and foremost concern. While AI products show tremendous promise in helping alleviate physician administrative burdens and may ultimately be successfully utilized in direct patient care, OpenAI’s ChatGPT and other generative AI products currently have known issues and are not error free."

He adds that "the current limitations create potential risks for physicians and patients and should be utilized with appropriate caution at this time. AI-generated fabrications, errors, or inaccuracies can harm patients, and physicians need to be acutely aware of these risks and added liability before they rely on unregulated machine-learning algorithms and tools."

“Just as we demand proof that new medicines and biologics are safe and effective, so must we insist on clinical evidence of the safety and efficacy of new AI-enabled healthcare applications," Ehrenfeld concludes.

Diagnosis and treatment
Alex is “happy go lucky” and loves playing with other children. He played baseball last year, but he quit because he was injured. Also, he had to give up hockey because wearing ice skates hurts his back and knees. He found a way to adapt, though.

“He’s so freaking intelligent,” Courtney says. “He’ll climb up on a structure, stand on a chair, and starts being the coach. So, he keeps himself in the game.”

After receiving the diagnosis, Alex underwent surgery to fix his tethered cord syndrome a few weeks ago and is still recovering.

“We detach the cord from where it is stuck at the bottom of the tailbone essentially,” Gilmer says. “That releases the tension.”

Courtney shared their story to help others facing similar struggles.

“There’s nobody that connects the dots for you,” she says. “You have to be your kid’s advocate.”
 
'Overwhelming consensus' on AI regulation - Musk
Tesla CEO Elon Musk says there was "overwhelming consensus" for regulation on artificial intelligence after tech heavyweights gathered in Washington to discuss AI.

Tech bosses attending the meeting included Meta's Mark Zuckerberg and Google boss Sundar Pichai.


Microsoft's former CEO Bill Gates and Microsoft's current CEO Satya Nadella were also in attendance.

The Wednesday meeting with US lawmakers was held behind closed doors.

The forum was convened by Senate Majority Leader Chuck Schumer and included the tech leaders as well as civil rights advocates.

The power of artificial intelligence - for both good and bad - has been the subject of keen interest from politicians around the world.

In May, Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee, describing the potential pitfalls of the new technology.

ChatGPT and other similar programmes can create incredibly human-like answers to questions - but can also be wildly inaccurate.

"I think if this technology goes wrong, it can go quite wrong...we want to be vocal about that," Mr Altman said. "We want to work with the government to prevent that from happening," he said.

There are fears that the technology could lead to mass layoffs, turbocharge fraud and make misinformation more convincing.
AI companies have also been criticised for training their models on data scraped from the internet without permission or payment to creators.

In April, Mr Musk told the BBC: "I think there should be a regulatory body established for overseeing AI to make sure that it does not present a danger to the public."

In Wednesday's meeting, he said he wanted a "referee" for artificial intelligence.

"I think we'll probably see something happen. I don't know on what timeframe or exactly how it will manifest itself," he told reporters after.

Mr Zuckerberg said that Congress "should engage with AI to support innovation and safeguards".

He added it was "better that the standard is set by American companies that can work with our government to shape these models on important issues".

 
Veteran Bollywood actor Anil Kapoor has won a landmark case preventing his image from being used in any manner by Artificial Intelligence (AI) technology.

The actor filed a case with the Delhi High Court, after numerous morphed videos and emojis featuring his iconic phrase ‘jhakas’ from the 1985 film ‘Yudh’ went viral on social media.

The suit had asked for protection of the actor's personality rights - including his name, image, likeness and voice - against any misuse on social media, and listed various instances of how the actor's attributes were being misused. After a detailed hearing, the court sided with Anil Kapoor by acknowledging his personality rights, and restrained users from misusing those attributes without his permission or consent.

Speaking to Variety, Anil Kapoor said he was happy with the order.

“I’m very happy with this court order, which has come in my favor, and I think it’s very progressive and great for not only me but for other actors also. Because of the way technology and the AI technology, which is which is evolving every day [and] which can completely take advantage of and be misused commercially, as well as where my image, voice, morphing, GIFs and deep fakes are concerned, I can straight away, if that happens, send a court order and injunction and they have to pull it down.”

“It’s not only for me,” the ‘Slumdog Millionaire’ actor stressed. “Today, I’m there to protect myself, but when I’m not there, the family should have the right to protect my [personality] and gain from it in future.”

“My intention is not to interfere with anyone’s freedom of expression or to penalize anyone. My intent was to seek protection of my personality rights and prevent any misuse for commercial gains, particularly in the current scenario with rapid changes in technology and tools like artificial intelligence.”

AI is a central element to the SAG-AFTRA strikes in Hollywood. Anil Kapoor expressed solidarity with the ongoing strike.

“This [the court order] should be great positive news for all of them to a certain extent. And I am always, completely with them in every which way, and I feel their rights should be protected, because everybody, big, small, popular, not popular, every actor has the right to protect themselves and their rights.”

Source: The Current
 
ChatGPT's OpenAI Sacks CEO Sam Altman

Altman's shock departure "follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities," a statement said.

OpenAI, the company that created ChatGPT a year ago, said Friday it had dismissed CEO Sam Altman as it no longer had confidence in his ability to lead the Microsoft-backed firm.

Altman, 38, became a tech world sensation with the release of ChatGPT, an artificial intelligence chatbot with unprecedented capabilities, churning out human-level content like poems or artwork in just seconds.

OpenAI's board said in a statement that Altman's departure "follows a deliberative review process," which concluded "he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities."



 
Veteran Bollywood actor Anil Kapoor has won a landmark case preventing his image from being used in any manner by Artificial Intelligence (AI) technology.

This is one drawback of AI technology.

Fake videos and images can be used to harass or blackmail people. There need to be strict laws to prevent this.
 
There is no limit to this technology.

It could be exploited in even worse ways for cybercrime.

In simple words, AI is destructive; it will cause more harm than it provides benefits.
 
But there is no stopping it. It is something that will add value to a lot of things, like AI being applied to cars or used in education. It surely has a lot of cons too, but AI itself can be applied to detecting fraud. Overall, yes, it surely needs to be regulated.

Also, Humane's very interesting device, which I thought was just some futuristic concept, actually launched in the US a couple of days ago.
 
I was working on something for work - a task that we would normally pay an external consultant for - and we managed to get it done with AI in about two hours of trial and error. It's a game changer if you know how to use it, especially for iterative processes where you would otherwise have to pay someone by the hour for minor amendments.
 
OpenAI staff demand board resign over Sam Altman sacking

Staff at OpenAI have called on the board of the artificial intelligence company to resign after the shock dismissal of former boss Sam Altman.

In a letter, they question the board's competence, and accuse it of undermining the firm's work.

They also demand Mr Altman's reinstatement.

But Mr Altman now has a job at Microsoft and seems to want to stay. He and Microsoft boss Satya Nadella see OpenAI's success as vital, he has indicated.

"satya and my top priority remains to ensure openai continues to thrive," he tweeted.

"we are committed to fully providing continuity of operations to our partners and customers. the openai/microsoft partnership makes this very doable."

The sacking on Friday of a man who is one of the leading figures in artificial intelligence (AI) shocked the tech world.

The letter's hundreds of signatories, who include senior staff, say they may themselves resign if their demands are not met.

They also state that Microsoft has assured them that there are jobs for all OpenAI staff if they want to join the company.

Aliisa Rosenthal, head of sales at OpenAI, posted on X - formerly Twitter - that 735 of the company's 770 workers had put their names to the letter.

"We spent our night calling team members, waking them up and asking them to sign. Everyone did so instantly. I've never seen so much clarity and unity in one group of people," she said.

One of the notable people to sign the letter is OpenAI's chief scientist, Ilya Sutskever - despite being a member of the board which now finds itself under fire.

Writing on X, he said that he had made a mistake.

"Now I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company", he posted.


 
OpenAI staff demand board resign over Sam Altman sacking



Goodness! What a drama this has been!
The OpenAI board members all seem like holier-than-thou, clueless, woke academics who should not be within a mile of any corporate board. They have destroyed the company, and how.

Altman has the last word. He has been hired by Microsoft and is taking his team with him, leaving OpenAI high, dry and potentially bankrupt.
 
OpenAI introduces free voice command feature on ChatGPT mobile app

In a groundbreaking move, OpenAI has announced the integration of free voice capabilities for all users of ChatGPT, marking a significant shift in the landscape of AI interaction.

Previously exclusive to premium subscribers, this update democratizes access to voice interaction, offering a more natural and intuitive way for users to engage with the AI.

With the latest update, users can now seamlessly interact with ChatGPT using voice commands by simply downloading the app and tapping a headphones icon.

This move is expected to significantly increase user interaction, providing a broader audience, including those who prefer voice commands over typing, with effortless access to AI technology.


 

ChatGPT Maker OpenAI Brings Back Sam Altman as CEO


ChatGPT maker OpenAI said Sam Altman will return as its CEO, just days after the startup's board dismissed him and he was hired by Microsoft.

Along with Altman's reinstatement, OpenAI's board is being reformed, the startup said in an X post Wednesday. Bret Taylor, former co-CEO at Salesforce, will become chair of the new board, and former U.S. Secretary of the Treasury Larry Summers will also join. Quora co-founder and CEO Adam D’Angelo, who was on OpenAI's board previously, will remain.

Microsoft CEO Satya Nadella welcomed the decision, saying in an X post that this was a "first essential step on a path to more stable, well-informed, and effective governance" at OpenAI. Microsoft is OpenAI's largest shareholder, with a stake of roughly 49%, and has invested billions into OpenAI amid a boom in demand for AI tools.

The move to bring Altman back to OpenAI comes after hundreds of OpenAI employees threatened to quit their jobs, demanding Altman's reinstatement and the resignation of the board that dismissed him. Microsoft also offered to hire and match compensation for any resigning OpenAI members.

Microsoft shares were up over 1% in early trading Wednesday following the news. They had dropped at Altman's ouster and then surged to a record high after he was hired.

Source: Investopedia
 
I, for one, welcome our AI overlords.
Jokes aside, I have been working in AI for the past couple of years now. It is a game changer.
But knowing humans, it will cause much more harm and destruction at scale than any benefits it provides.

We are in for a very bleak future beyond the 2070s or 2080s. Hopefully I will be dead by then.
 
According to Media sources:

With a tap, any Android or Apple phone user will be able to talk to ChatGPT.

OpenAI, the company behind the hugely popular AI chatbot, introduced voice chats on ChatGPT for Android and iOS in September, but initially only for its Plus and Enterprise subscribers. Now, co-founder Greg Brockman has revealed the feature is rolling out to all free users on mobile.

How will ChatGPT voice chat work on the mobile app?

Now, with ChatGPT’s voice chat right in your mobile app, you can dive right into dynamic, spoken dialogues with the chatbot just by hitting the little voice symbol. Your spoken words will be interpreted by the large language model, and you’ll receive a response that eerily mirrors human speech, taking your conversation to a whole new realm of authenticity.

Powered by a text-to-speech model, it generates “human-like audio” and offers five different voices. Despite the wide release announcement, universal access may take time. It’s still unclear if users need to opt in, but paid subscribers can enable it in Settings.

OpenAI claims ChatGPT’s voice chat not only enhances the user experience but also caters to a variety of preferences, offering a rich, multimodal experience that goes hand-in-hand with classic text-based interactions. It has the potential to offer a whole new level of engaging and accessible communication.
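
For developers, the same kind of text-to-speech capability is exposed through OpenAI's API. The snippet below follows the openai Python SDK's documented text-to-speech example; it is not the ChatGPT mobile app's internal pipeline, and the model and voice names reflect the documentation at the time of writing.

```python
# Sketch of turning text into spoken audio with OpenAI's text-to-speech API,
# following the SDK's documented example. This illustrates the capability
# described above; it is not the mobile app's internal implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",   # one of several preset voices
    input="Hello! This is a spoken reply generated from text.",
)
response.stream_to_file("reply.mp3")
```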
 
OpenAI releases guidelines to gauge AI risks

NEW YORK: ChatGPT-maker OpenAI published on Monday its newest guidelines for gauging “catastrophic risks” from artificial intelligence in models currently being developed.

The announcement comes one month after the company’s board fired CEO Sam Altman, only to hire him back a few days later when staff and investors rebelled.

According to US media, board members had criticized Altman for favoring the accelerated development of OpenAI, even if it meant sidestepping certain questions about its tech’s possible risks.

In a “Preparedness Framework” published on Monday, the company states: “We believe the scientific study of catastrophic risks from AI has fallen far short of where we need to be.” The framework, it reads, should “help address this gap.” A monitoring and evaluations team announced in October will focus on “frontier models” currently being developed that have capabilities superior to the most advanced AI software.

The team will assess each new model and assign it a level of risk, from “low” to “critical,” in four main categories.

Only models with a risk score of “medium” or below can be deployed, according to the framework.

The first category concerns cybersecurity and the model’s ability to carry out large-scale cyberattacks.

The second will measure the software’s propensity to help create a chemical mixture, an organism (such as a virus) or a nuclear weapon, all of which could be harmful to humans.

The third category concerns the persuasive power of the model, such as the extent to which it can influence human behavior. The last category of risk concerns the potential autonomy of the model, in particular whether it can escape the control of the programmers who created it.
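
As a purely illustrative sketch of the deployment rule described above, the gate can be thought of as a simple check across the four categories; the category names, level ordering and function below are assumptions for illustration, not OpenAI's actual implementation.

```python
# Hypothetical illustration of the framework's deployment rule: a model may
# only be deployed if its risk level in every tracked category is "medium" or
# lower. Category names and this scoring interface are assumptions.
RISK_LEVELS = ["low", "medium", "high", "critical"]

def can_deploy(scores: dict[str, str]) -> bool:
    """scores maps a risk category to its assessed level."""
    threshold = RISK_LEVELS.index("medium")
    return all(RISK_LEVELS.index(level) <= threshold for level in scores.values())

example = {
    "cybersecurity": "medium",
    "chemical_biological_nuclear": "low",
    "persuasion": "medium",
    "model_autonomy": "low",
}
print(can_deploy(example))                            # True: everything is medium or below
print(can_deploy({**example, "persuasion": "high"}))  # False: one category is too risky
```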
 
Very impressive. The fact that you can now build your own custom GPT is very cool, compared to having only 3.5 at the start of the year. I have heard that 4.5 will be released by the end of the year.
 
AI can’t be patent inventor, UK court rules

LONDON: A US computer scientist lost his bid on Wednesday to register patents over inventions created by his artificial intelligence system in a landmark case in Britain about whether AI can own patent rights.

Stephen Thaler wanted to be granted two patents in the UK for inventions he says were devised by his “creativity machine” called DABUS. His attempt to register the patents was refused by the UK’s Intellectual Property Office (IPO) on the grounds that the inventor must be a human or a company, rather than a machine.

Thaler approached the UK’s Supreme Court, which rejected his appeal as under UK patent law “an inventor must be a natural person”.

Judge David Kitchin said in the court’s written ruling that the case was “not concerned with the broader question whether technical advances generated by machines acting autonomously and powered by AI should be patentable”.

Thaler’s lawyers said the ruling “establishes that UK patent law is currently wholly unsuitable for protecting inventions generated autonomously by AI machines and as a consequence wholly inadequate in supporting any industry that relies on AI in the development of new technologies”.
 
US news organisation the New York Times is suing ChatGPT-owner OpenAI over claims its copyright was infringed to train the system.

The lawsuit, which also names Microsoft as a defendant, says the firms should be held responsible for "billions of dollars" in damages.

ChatGPT and other large language models (LLMs) "learn" by analysing a massive amount of data often sourced online.

The BBC has approached OpenAI and Microsoft for comment.

The lawsuit claims "millions" of articles published by the New York Times were used without its permission to make ChatGPT smarter, and claims the tool is now competing with the newspaper as a trustworthy information source.

It alleges that when asked about current events, ChatGPT will sometimes generate "verbatim excerpts" from New York Times articles, which cannot be accessed without paying for a subscription.

According to the lawsuit, this means readers can get New York Times content without paying for it - meaning it is losing out on subscription revenue as well as advertising clicks from people visiting the website.

It also gave the example of the Bing search engine - which has some features powered by ChatGPT - producing results taken from a New York Times-owned website, without linking to the article or including referral links it uses to generate income.

Microsoft has invested more than $10 billion (£7.8 billion) in OpenAI.

The lawsuit, filed on Wednesday in a Manhattan federal court, reveals the New York Times unsuccessfully approached Microsoft and OpenAI in April to seek "an amicable resolution" over its copyright.

Multiple lawsuits
It comes a month after a period of chaos at OpenAI where co-founder and CEO Sam Altman was sacked - and then rehired - over the course of a few days.

His sacking shocked industry insiders and led to staff threatening mass resignations unless he was reinstated.

But as well as the internal issues, the firm is now facing multiple lawsuits filed in 2023.

In September a similar copyright infringement case was brought by a group of US authors including Game of Thrones novelist George RR Martin and John Grisham.

That followed legal action brought by comedian Sarah Silverman in July, as well as an open letter signed by authors Margaret Atwood and Philip Pullman that same month calling for AI companies to compensate them for using their work.

And OpenAI is also facing a lawsuit alongside Microsoft - and programming site GitHub - from a group of computing experts who argue their code was used without their permission to train an AI called Copilot.

As well as these actions, there have been many cases brought against developers of so-called generative AI - that is, artificial intelligence that can create media based on text prompts - with artists suing text-to-image generators Stability AI and Midjourney in January, claiming they only function by being trained on copyrighted artwork.

None of these lawsuits have yet been resolved.

Source: BBC

 
One of the things I want to do in 2024 is to embrace and integrate AI in my daily activities.

I haven't used AI much so far.
 
It's really useful for doing groundwork and making templates to amend. While it's not clever enough yet to produce foolproof documents or analysis, it takes a lot of the effort out of trial-and-error type tasks.
 
The New York Times (NYT.N) sued OpenAI and Microsoft (MSFT.O) on Wednesday, accusing them of using millions of the newspaper's articles without permission to help train chatbots to provide information to readers.

The Times said it is the first major U.S. media organization to sue OpenAI, creator of the popular artificial-intelligence platform ChatGPT, and Microsoft, an OpenAI investor and creator of the AI platform now known as Copilot, over copyright issues associated with its works.



Reuters
 
OpenAI To Launch Anti-disinformation Tools For 2024 Elections

ChatGPT maker OpenAI has said it will introduce tools to combat disinformation ahead of the dozens of elections this year in countries that are home to half the world's population.

The explosive success of text generator ChatGPT spurred a global artificial intelligence revolution but also triggered warnings that such tools could flood the internet with disinformation and sway voters.

With elections due this year in countries including the United States, India and Britain, OpenAI said Monday it will not allow its tech -- including ChatGPT and the image generator DALL-E 3 -- to be used for political campaigns.

"We want to make sure our technology is not used in a way that could undermine" the democratic process, OpenAI said in a blog post.

"We're still working to understand how effective our tools might be for personalized persuasion," it added.

"Until we know more, we don't allow people to build applications for political campaigning and lobbying."

AI-driven disinformation and misinformation are the biggest short-term global risks and could undermine newly elected governments in major economies, the World Economic Forum warned in a report released last week.

Fears over election disinformation began years ago, but the public availability of potent AI text and image generators has boosted the threat, experts say, especially if users cannot easily tell if the content they see is fake or manipulated.

OpenAI said Monday it was working on tools that would attach reliable attribution to text generated by ChatGPT, and also give users the ability to detect if an image was created using DALL-E 3.

"Early this year, we will implement the Coalition for Content Provenance and Authenticity's digital credentials -- an approach that encodes details about the content's provenance using cryptography," the company said.

The coalition, also known as C2PA, aims to improve methods for identifying and tracing digital content. Its members include Microsoft, Sony, Adobe and Japanese imaging firms Nikon and Canon.
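
As a very simplified illustration of the general idea - attaching signed provenance metadata to content so tampering can be detected - here is a sketch using Python's standard hashlib and hmac modules. This is not the C2PA specification itself, and a real credential system would use asymmetric keys and certificates rather than a shared secret.

```python
# Simplified sketch of cryptographic provenance: bind metadata (who made the
# content, with what tool) to a hash of the content and sign it, so any change
# to either is detectable. Not the C2PA spec, just the underlying pattern.
import hashlib, hmac, json

SECRET_KEY = b"demo-key"  # a real system would use asymmetric keys, not a shared secret

def attach_credential(content: bytes, metadata: dict) -> dict:
    manifest = {"sha256": hashlib.sha256(content).hexdigest(), **metadata}
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_credential(content: bytes, credential: dict) -> bool:
    payload = json.dumps(credential["manifest"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, credential["signature"])
            and credential["manifest"]["sha256"] == hashlib.sha256(content).hexdigest())

image = b"...image bytes..."
cred = attach_credential(image, {"generator": "DALL-E 3", "created": "2024-01-15"})
print(verify_credential(image, cred))        # True: untouched content verifies
print(verify_credential(b"tampered", cred))  # False: altered content fails
```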

OpenAI said ChatGPT, when asked procedural questions about US elections such as where to vote, will direct users to authoritative websites.

"Lessons from this work will inform our approach in other countries and regions," the company said.

It added that DALL-E 3 has "guardrails" that prevent users from generating images of real people, including candidates.

OpenAI's announcement follows steps revealed last year by US tech giants Google and Facebook parent Meta to limit election interference, especially through the use of AI.

AFP has previously debunked deepfakes -- doctored videos -- of US President Joe Biden announcing a military draft and former secretary of state Hillary Clinton endorsing Florida Governor Ron DeSantis for president.

Doctored footage and audio of politicians were circulated on social media ahead of the presidential election this month in Taiwan, AFP Fact Check found.

While much of this content is low-quality and it is not immediately clear if it is created with AI apps, experts say disinformation is fuelling a crisis of trust in political institutions.

 
Europe Within Reach of Landmark AI Rules After Nod From EU Countries

Europe on Friday moved a step closer to adopting rules governing the use of artificial intelligence and AI models such as Microsoft-backed ChatGPT after EU countries endorsed a political deal reached in December.

The rules, proposed by the European Commission three years ago, aim to set a global standard for a technology used in a vast swathe of industries from banking and retail to the car and airline sectors.

They also set parameters for the use of AI for military, crime and security purposes.

EU industry chief Thierry Breton said the Artificial Intelligence (AI) Act is historic and a world first.

"Today member states endorsed the political agreement reached in December, recognising the perfect balance found by the negotiators between innovation and safety," he said in a statement.

A major concern of experts is that generative AI has boosted deepfakes - realistic yet fabricated videos created by AI algorithms trained on copious online footage - which surface on social media, blurring fact and fiction in public life.

EU digital chief Margrethe Vestager said the spread of fake sexually explicit images of pop singer Taylor Swift on social media in recent days underscored the need for the new rules.

"What happened to @taylorswift13 tells it all: the #harm that #AI can trigger if badly used, the responsibility of #platforms, & why it is so important to enforce #tech regulation," she said on X social platform.

Friday's agreement was a foregone conclusion after France, the last holdout, dropped its opposition to the AI Act after securing strict conditions that balance transparency versus business secrets and reduce the administrative burden on high risk AI systems.

The aim is to allow competitive AI models to develop in the bloc, an EU diplomatic official told Reuters earlier on Friday. The official declined to be named because they were not authorised to comment publicly on the issue.

French AI start-up Mistral, founded by former Meta and Google AI researchers, and Germany's Aleph Alpha have been lobbying their respective governments on the issue, sources said.

Germany earlier this week also backed the rules.

Tech lobbying group CCIA, which counts Alphabet's Google, Amazon, Apple and Meta Platforms as members, warned of roadblocks ahead.

"Many of the new AI rules remain unclear and could slow down the development and roll-out of innovative AI applications in Europe," CCIA Europe's Senior Policy Manager Boniface de Champris said.

"The Act's proper implementation will therefore be crucial to ensuring that AI rules do not overburden companies in their quest to innovate and compete in a thriving, highly dynamic market."

The next step for the AI Act to become legislation is a vote by a key committee of EU lawmakers on Feb. 13 and the European Parliament vote either in March or April. It will likely enter into force before the summer and should apply in 2026 although parts of the legislation will kick in earlier.

SOURCE: https://www.reuters.com/technology/...
 
OpenAI valued at $80 billion after deal, NYT reports

Microsoft-backed OpenAI (MSFT.O) has completed a deal that values the artificial intelligence company at $80 billion or more, the New York Times reported on Friday, citing people with knowledge of the deal.

The company would sell existing shares in a so-called tender offer led by venture firm Thrive Capital, the report said.

Under the deal, employees will be able to cash out their shares of the company rather than a traditional funding round which would raise money for the business, the report added.


 
Exactly.

There are already deepfake issues.

People can frame one another using advanced deepfakes. Concerning signs.
AI is already creating many problems for humans. Deepfakes and fabricated audio are mind-blowing, yet dangerous as well.
 
Google releases ‘open’ AI models after Meta

Google on Wednesday released new artificial intelligence (AI) models that outside developers potentially can fashion as their own, following a similar move by Meta Platforms and others.

The Alphabet subsidiary said individuals and businesses can build AI software based on its new family of “open models” called Gemma, for free. The company is making key technical data such as what are called model weights publicly available, it said.

The move may attract software engineers to build on Google’s technology and encourage usage of its newly profitable cloud division. The models are “optimized” for Google Cloud, where first-time cloud customers using them get $300 in credits, the company said.

Google stopped short of making Gemma fully “open source,” meaning the company still may have a hand in setting terms of use and ownership. Some experts have said open-source AI was ripe for abuse, while others have championed the approach for widening the set of people who can contribute to and benefit from the technology.

With the announcement, Google did not make its bigger, premier models known as Gemini open, unlike Gemma. It said the Gemma models are sized at two billion or seven billion parameters — or the number of different values that an algorithm takes into account to generate output.

Meta’s Llama 2 models range from seven to 70 billion parameters in size. Google has not disclosed the size of its largest Gemini models. For comparison, OpenAI’s GPT-3 model announced in 2020 had 175 billion parameters.
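
For developers who want to see what "open weights" and a parameter count mean in practice, here is a short sketch using the Hugging Face transformers library. Downloading the google/gemma-2b checkpoint requires accepting Google's license terms on Hugging Face first; the counting logic itself is generic.

```python
# Load an open-weights model and count its parameters. A "parameter" is one
# learnable weight; the headline model size is simply the total number of them.
# Access to google/gemma-2b requires accepting the license on Hugging Face.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")

total_params = sum(p.numel() for p in model.parameters())
print(f"{total_params / 1e9:.2f} billion parameters")
```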

Chipmaker Nvidia on Wednesday said it has worked with Google to ensure Gemma models run smoothly on its chips. Nvidia also said it will soon make chatbot software, which it is developing to run AI models on Windows PCs, work with Gemma.

SOURCE: https://www.reuters.com/technology/google-releases-open-ai-models-after-meta-2024-02-21/
 
Nvidia: US tech giant unveils latest artificial intelligence chip

Nvidia has unveiled its latest artificial intelligence (AI) chip which it says can do some tasks 30 times faster than its predecessor.

The firm has an 80% market share and hopes to cement its dominance.

In addition to the B200 "Blackwell" chip, its chief executive Jensen Huang detailed a new set of software tools at its annual developer conference.

Nvidia is the third-most valuable company in the US, behind only Microsoft and Apple.

Its shares have surged 240% over the past year and its market value touched $2tn (£1.57tn) last month.

As Mr Huang kicked off the conference, he jokingly said, "I hope you realise this is not a concert."

But Bob O'Donnell from Technalysis Research who was at the event told the BBC that "the buzz was in the air".

"I haven't seen something like this in the tech industry in quite some time," he said.

"In fact, some people were making analogies to the early days of Steve Jobs types of presentations."

Nvidia said major customers including Amazon, Google, Microsoft and OpenAI are expected to use the firm's new flagship chip in cloud-computing services and for their own AI offerings.

It also said the new software tools, called microservices, improve system efficiency to make it easier for a business to incorporate an AI model into its work.

Other announcements include a new line of chips for cars which can run chatbots inside the vehicle. The company said Chinese electric vehicle makers BYD and Xpeng would both use its new chips.

Mr Huang also outlined a new series of chips for creating humanoid robots, inviting several of the robots to join him on the stage.

Founded in 1993, Nvidia was originally known for making the type of computer chips that process graphics, particularly for computer games.

Long before the AI revolution, it started adding features to its chips that it says help machine learning, investments that have helped it gain market share.

It is now seen as a key company to watch to see how fast AI-powered tech is spreading across the business world.

But competition is heating up from rivals such as AMD and Intel.

Mr O'Donnell said the market was growing so quickly that "even if Nvidia loses some share, they can still grow their overall business because there's just a lot of opportunities for everybody".

BBC
 
I have to say it’s moving too fast now, making it hard to keep up with technology.
 
I have to say it’s moving too fast now, making it hard to keep up with technology.

I used it for the first time for my resume and was impressed.

I am against it in any creative field though and I’d be worried about it in any other
 
I used it for the first time for my resume and was impressed.

I am against it in any creative field though and I’d be worried about it in any other
It's too late, the cat's out of the bag. AI and all models are being heavily used in the US.
Doesn't help that Meta is also releasing its LLMs to everyone. Literally every product company I know is working on it now, and even chip manufacturing is heavily tuned towards AI. Very interesting times if one is young and ready to learn; unfortunately, with the inflation and the job market, it's slightly worrying.
 
It's too late, the cat's out of the bag. AI and all models are being heavily used in the US.
Doesn't help that Meta is also releasing its LLMs to everyone. Literally every product company I know is working on it now, and even chip manufacturing is heavily tuned towards AI. Very interesting times if one is young and ready to learn; unfortunately, with the inflation and the job market, it's slightly worrying.

Just like Dusty Rhodes predicted in 1985:

“And hard times are when a man has worked at a job for thirty years, thirty years, and they give him a watch, kick him in the butt and say “hey a computer took your place, daddy”, that’s hard times!”
 
Just like Dusty Rhodes predicted in 1985:

“And hard times are when a man has worked at a job for thirty years, thirty years, and they give him a watch, kick him in the butt and say “hey a computer took your place, daddy”, that’s hard times!”
Yeah, I'm certain that's where it's heading, but you know blue-collar workers have always had to deal with this; it's only the first time white-collar workers are dealing with it.
 
It's too late, the cat's out of the bag. AI and all models are being heavily used in the US.
Doesn't help that Meta is also releasing its LLMs to everyone. Literally every product company I know is working on it now, and even chip manufacturing is heavily tuned towards AI. Very interesting times if one is young and ready to learn; unfortunately, with the inflation and the job market, it's slightly worrying.
Rightly put. There is a race to achieve AGI as soon as possible. Whoever achieves this first will have the world at their feet. Once AGI is achieved, ASI is only a matter of time.
As you have mentioned, no job is safe. With robotics, many manual jobs are also under severe threat. In 5 years, the world will look very different.

I sincerely hope the US achieves AGI before China. Some say that AGI has already been achieved by OpenAI, but they will not announce it until the elections are over.

I sometimes think about what overpopulated South Asian countries will do when robotics reaches the level where many manual jobs get automated. Outsourcing will reduce severely and millions will be affected by this. The short term looks bleak and the long term looks scary. I hope India will be ready for the coming changes.
 
Rightly put. There is a race to achieve AGI as soon as possible. Whoever achieves this first will have the world at their feet. Once AGI is achieved, ASI is only a matter of time.
As you have mentioned, no job is safe. With robotics, many manual jobs are also under severe threat. In 5 years, the world will look very different.

I sincerely hope the US achieves AGI before China. Some say that AGI has already been achieved by OpenAI, but they will not announce it until the elections are over.

I sometimes think about what overpopulated South Asian countries will do when robotics reaches the level where many manual jobs get automated. Outsourcing will reduce severely and millions will be affected by this. The short term looks bleak and the long term looks scary. I hope India will be ready for the coming changes.
Yeah, South Asia will suffer due to reduced outsourcing, but at the same time there's no way robotic manufacturing can keep up. If South Asia can reskill themselves, then great.

TBH, even first-world countries will suffer with heavy automation; no one is even ready.

In my working lifetime I haven't seen such a bad salary-to-inflation ratio; every week there are so many layoffs in the US as well, with jobs being shifted to India for now or simply being cut.

OpenAI is moving so fast that it's scary, although, having said all that, the economy still runs on payment and productivity, and that's only possible because of humans.
 
Most of us are stuck in a rat race. AGI/ASI should free us from it. The most important thing for nations to strive for is Universal Basic Income. Exciting times. Hopefully AGI is achieved in my lifetime.
 
Most of us are stuck in a rat race. AGI/ASI should free us from it. The most important thing for nations to strive for is Universal Basic Income. Exciting times. Hopefully AGI is achieved in my lifetime.
The rat race might be finished, but the gap between the haves and the have-nots will increase.
 
The rat race might be finished, but the gap between the haves and the have-nots will increase.
Once AGI/ASI is achieved, nothing else matters, TBH. The moment AI becomes smarter than all humans, the next second it becomes exponentially smarter than before. Most diseases would have cures. Most mundane work would be done by robots. Traffic would be a solved problem. Except maybe for deep space exploration, everything else would be solved. At least that's the theory. Wealth brings you freedom; AGI/ASI does the same without wealth.
 
Once AGI/ASI is achieved, nothing else matters, TBH. The moment AI becomes smarter than all humans, the next second it becomes exponentially smarter than before. Most diseases would have cures. Most mundane work would be done by robots. Traffic would be a solved problem. Except maybe for deep space exploration, everything else would be solved. At least that's the theory. Wealth brings you freedom; AGI/ASI does the same without wealth.
I don't know if or when that will happen, the moment AI becomes "smarter than humans". But if and when it really does, I'd argue AI would find humans nothing more than a subspecies of its own.
 
Yeah, South Asia will suffer due to reduced outsourcing, but at the same time there's no way robotic manufacturing can keep up. If South Asia can reskill themselves, then great.

TBH, even first-world countries will suffer with heavy automation; no one is even ready.

In my working lifetime I haven't seen such a bad salary-to-inflation ratio; every week there are so many layoffs in the US as well, with jobs being shifted to India for now or simply being cut.

OpenAI is moving so fast that it's scary, although, having said all that, the economy still runs on payment and productivity, and that's only possible because of humans.
There's ferocious debate raging about this on economic forums, blogs, papers, Twitter etc.

My personal feeling is that it's overblown.

- When we started applying modern techniques to agriculture, there must have been worry about what all the farmers would do. They moved to artisanship, and now less than 5% of people depend on agriculture in most of the West.
- When we started mechanizing - textile mills, printing presses, etc. - there was worry about where all the artisans and craftsmen would go; they went into industry.
- When we started automating industry, they went into services. Less than 20% of the US workforce is now in industry.
- Now we're starting to automate some services. We'll have to figure out what's next. I'm pretty confident we will.

There are going to be some fundamental changes though and new winners and losers.

India's decently placed, I think. Yes, we have a huge population to feed, but a lot of the skills needed will involve being able to monitor, make sense of, and effectively use AI, which requires language skills, technology skills, a willingness to adapt, etc., and we have a decent proportion of that compared to, say, China.
 
There is a saying that when there is a gold rush, sell shovels. Nvidia is doing exactly that. If you had bought $400k worth of Nvidia stock 12 years back, you would have $100 million today 😭
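A quick back-of-the-envelope check of what growth rate that claim would imply (the $400k and $100 million figures are the poster's, taken at face value, not verified):

# $400k growing to $100M over 12 years: what annual growth rate does that imply?
start, end, years = 400_000, 100_000_000, 12

multiple = end / start                # 250x overall
cagr = multiple ** (1 / years) - 1    # implied compound annual growth rate
print(f"{multiple:.0f}x overall, roughly {cagr:.1%} per year")
# prints: 250x overall, roughly 58.4% per year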
 
Will a lot of IT jobs disappear due to AI?

Which IT sectors should be affected the most? Which IT sectors should be okay?
 
Will a lot of IT jobs disappear due to AI?

Which IT sectors should be affected the most? Which IT sectors should be okay?
Yes, but more should be created as well.
Anything that can be automated will be automated.

The only big threat right now is Devin AI; everything else enhances IT, but Devin replaces coders.

I do see more and more businesses bypassing coders and application developers to make their own business products within IT, but they still need to tell it what to do.
Maybe the MBA will make a comeback.
 
Yes, but more should be created as well.
Anything that can be automated will be automated.

The only big threat right now is Devin AI; everything else enhances IT, but Devin replaces coders.

I do see more and more businesses bypassing coders and application developers to make their own business products within IT, but they still need to tell it what to do.
Maybe the MBA will make a comeback.

Is the data science field going to be impacted by AI (i.e., job loss)?
 
Is the data science field going to be impacted by AI (i.e., job loss)?
It's so tough to predict based on the state of technology today.

I have asked one of my teams to try it as an experiment, i.e. use ML and Gen AI in combination to try to get some intelligible analysis from a dataset we use, without directing it too much.

What they've come up with so far hasn't impressed me too much to be honest. I approved a small budget to consult with EY and get some help as well so let's see.

I think the difficult part is to predict where it will go and how much it'll be able to do. Everyone has different opinions. We have to be cognizant that the tools are still in their infancy so who knows how they'll grow.

To answer your question, yes, the field will be impacted. A lot of the low-level data cleanup, prep, etc. should be taken over. The ability to ask the right questions, though, is a skill I see as still important and difficult to replace.
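For anyone wondering what the experiment described above might look like in code, here is a minimal sketch under my own assumptions (a hypothetical sales.csv, the official OpenAI Python client, and a gpt-4o model): hand the model undirected summary statistics and ask what stands out, rather than steering it toward a particular answer.

import pandas as pd
from openai import OpenAI

# Hypothetical dataset; describe() gives broad, undirected summary statistics.
df = pd.read_csv("sales.csv")
summary = df.describe(include="all").to_string()

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        {"role": "system", "content": "You are a careful data analyst."},
        {"role": "user", "content": "Here are summary statistics of a dataset. "
                                    "What patterns or anomalies stand out?\n\n" + summary},
    ],
)
print(response.choices[0].message.content)

The hard part remains deciding which dataset, which summary, and which follow-up questions to ask, which is exactly the skill mentioned above.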
 
Is the data science field going to be impacted by AI (i.e., job loss)?
Probably. As AI can give a summary of an email, my assumption is it can assist in giving a summary of data.

But it should be enhancing rather than replacing.
 
It's so tough to predict based on the state of technology today.

I have asked one of my teams to try it as an experiment, i.e. use ML and Gen AI in combination to try to get some intelligible analysis from a dataset we use, without directing it too much.

What they've come up with so far hasn't impressed me too much to be honest. I approved a small budget to consult with EY and get some help as well so let's see.

I think the difficult part is to predict where it will go and how much it'll be able to do. Everyone has different opinions. We have to be cognizant that the tools are still in their infancy so who knows how they'll grow.

To answer your question, yes, the field will be impacted. A lot of the low-level data cleanup, prep, etc. should be taken over. The ability to ask the right questions, though, is a skill I see as still important and difficult to replace.
One company aims to give its customer sales reps assistance via AI as to whether they are placing the right order in the supply chain, alerting them based on photos, etc.

The above is a very low-level pilot program, but it will be used to enhance the AI.

Devin AI is the only direct one targeting coders.
 
One company aims to give its customer sales reps assistance via AI as to whether they are placing the right order in the supply chain, alerting them based on photos, etc.

The above is a very low-level pilot program, but it will be used to enhance the AI.

Devin AI is the only direct one targeting coders.
I attended a fancy industry conference held by KPMG. Amid all the wining and dining, they showed us a bunch of interesting use cases. Most of them didn't impress me too much but as I said, it's early days. Early cellphones with internet didn't impress me too much either till Apple figured out the killer apps. Someone needs to figure that out here.
 
I attended a fancy industry conference held by KPMG. Amid all the wining and dining, they showed us a bunch of interesting use cases. Most of them didn't impress me too much but as I said, it's early days. Early cellphones with internet didn't impress me too much either till Apple figured out the killer apps. Someone needs to figure that out here.
You are thinking as a consumer; think of it like enterprise software. Definitely all the shiny bling-bling stuff will happen in the consumer space, but the tough part is enterprise software; there is a reason why Google has never been able to crack that nut.

Enterprise software is based on a company's needs and is usually boring, but it takes into consideration security and integration with other organizational tools.

Only Copilot right now is an enterprise-level AI; others are far off.
 
UN General Assembly to address AI's potential risks, rewards

The UN General Assembly will turn its attention to artificial intelligence on Thursday, weighing a resolution that lays out the potentially transformational technology's pros and cons while calling for the establishment of international standards.

The text, co-sponsored by dozens of countries, emphasizes the necessity of guidelines "to promote safe, secure and trustworthy artificial intelligence systems," while excluding military AI from its purview.

On the whole, the resolution focuses more on the technology's positive potential, and calls for special care "to bridge the artificial intelligence and other digital divides between and within countries."

The draft resolution, which is the first on the issue, was brought forth by the United States and will be submitted for approval by the assembly on Thursday.

It also seeks "to promote, not hinder, digital transformation and equitable access" to AI in order to achieve the UN's Sustainable Development Goals, which aim to ensure a better future for humanity by 2030.

"As AI technologies rapidly develop, there is urgent need and unique opportunities for member states to meet this critical moment with collective action," US Ambassador to the UN Linda Thomas-Greenfield said, reading a joint statement by the dozens of co-sponsor countries.

According to Richard Gowan, an analyst at the International Crisis Group, "the emphasis on development is a deliberate effort by the US to win goodwill among poorer nations."

"It is easier to talk about how AI can help developing countries progress rather than tackle security and safety topics head-on as a first initiative," he said.

- 'Male-dominated algorithms' -

The draft text does highlight the technology's threats when misused with the intent to cause harm, and also recognizes that without guarantees, AI risks eroding human rights, reinforcing prejudices and endangering personal data protection.

It therefore asks member states and stakeholders "to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights."

Warnings against the technology have become increasingly prevalent, particularly when it comes to generative AI tools and the risks they pose for democracy and society, particularly via fake images and speech shared in a bid to interfere in elections.

UN Secretary-General Antonio Guterres has made AI regulation a priority, calling for the creation of a UN entity modelled on other UN organizations such as the International Atomic Energy Agency (IAEA).

He has regularly highlighted the potential for disinformation and last week warned of bias in technologies designed mainly by men, which can result in algorithms that ignore the rights and needs of women.

"Male-dominated algorithms could literally program inequalities into activities from urban planning to credit ratings to medical imaging for years to come," he said.

Gowan of the International Crisis Group said he didn't "think the US wants Guterres leading this conversation, because it is so sensitive" and was therefore "stepping in to shape the debate."

A race is underway between various UN member states, the United States, China and South Korea, to be at the forefront of the issue.

In October, the White House unveiled rules intended to ensure that the United States leads the way in AI regulation, with President Joe Biden insisting on the need to govern the technology.


SOURCE: https://www.tbsnews.net/tech/un-general-assembly-address-ais-potential-risks-rewards-812486
 
ChatGPT is an invaluable tool that has seamlessly integrated into my daily routine. In fact, this very post has been crafted using ChatGPT. Additionally, I frequently rely on AI technology to enhance blurry images, which significantly streamlines my workflow and eliminates unnecessary hassle.
 
Did anyone try the AI music generator app Suno? I've been dabbling with it for the last few days and it's very impressive.
 
Didn't know there was one. I'm a failed music composer, it might be handy.
I have GitHub Copilot and GPT-4 integrated into my workflows. I was always interested in becoming a DJ and composer too. But the tool is more like a music generator and helps in brainstorming. Not perfect yet.
 
I have GitHub Copilot and GPT-4 integrated into my workflows. I was always interested in becoming a DJ and composer too. But the tool is more like a music generator and helps in brainstorming. Not perfect yet.
Yeah, I've just tried it. I'm looking for something where I can put in a vocal and have music generated in the same key.
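Not aware of a tool that does exactly that, but the first half of it (working out the key of the vocal so you can ask the generator for music in the same key) can be roughed out in Python with librosa. A crude sketch, with vocal.wav as a placeholder file; picking the strongest average pitch class only gives a rough tonic guess, not a proper major/minor key detection:

import librosa
import numpy as np

# Load the vocal recording (placeholder file name).
y, sr = librosa.load("vocal.wav")

# Chromagram: energy in each of the 12 pitch classes over time.
chroma = librosa.feature.chroma_cqt(y=y, sr=sr)

pitch_classes = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]
tonic = pitch_classes[int(np.argmax(chroma.mean(axis=1)))]
print(f"Rough tonic estimate: {tonic}")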
 

OpenAI's Altman pitches ChatGPT Enterprise to large firms, including some Microsoft customers


OpenAI Chief Executive Sam Altman has hosted hundreds of Fortune 500 company executives in San Francisco, New York and London this month where he and other OpenAI executives pitched AI services for corporate use, going head to head in some cases with financial backer Microsoft, attendees told Reuters.

The roadshow-like events illustrate how the company credited with sparking the explosion of generative artificial intelligence with its consumer offering is looking to grow new sources of revenue from corporates all over the world - some of it potentially on the home turf of its biggest partner.

The three meetings with senior corporate executives - two in the U.S. last week and one in London on Monday - have not been reported previously.

Altman directly addressed more than 100 executives in each city at the events, according to attendees who spoke on the condition of anonymity.

At each event, Altman and chief operating officer Brad Lightcap offered product demonstrations, including ChatGPT Enterprise, the enterprise grade of its famous chatbot that generates text from simple prompts, software to connect customer applications to its AI services known as APIs, and its new text-to-video models.

OpenAI has promised that ChatGPT Enterprise customers' data will not be used to train its models. Talking to potential customers from industries including finance, healthcare and energy, OpenAI executives highlighted a range of applications, such as call-center management and translation. They noted the consumer version of its chatbot is already in use by more than 92% of Fortune 500 companies.

Microsoft, the largest investor in OpenAI, offers access to OpenAI's technology through its Azure cloud and by selling Microsoft 365 Copilot, a productivity tool powered by OpenAI's models targeting enterprises.

Some executives in the audience at the events asked why they should pay for ChatGPT Enterprise if they are already customers of Microsoft, attendees said.

Altman and Lightcap responded that paying for the enterprise service allowed them to work with the OpenAI team directly, have access to the latest models and more opportunity to get customized AI products, according to attendees present.

OpenAI and Microsoft declined to comment.

OpenAI, last valued at $86 billion in a secondary sale, has been trying to diversify its revenue stream since its chatbot ChatGPT quickly gained popularity in late 2022. It is on track to achieve the $1 billion revenue target it projected for 2024, sources have said.

While trying to build out new products for consumers, such as the ChatGPT store marketplace, the company expects selling to enterprises to become a more meaningful part of its revenue.

Lightcap told Bloomberg last week more than 600,000 people signed up to use ChatGPT Enterprise and Team, up from around 150,000 in January.

Lightcap, the main OpenAI executive focused on enterprise adoption, has also spent time in Hollywood talking to studio executives to promote the company's Sora video creation tool.

That technology, which can create and refine videos based on a user's text description, has caused both excitement and anxiety within the creative industry.

Two major Hollywood studios told Reuters they are seeking early access to begin exploring applications, though there are some concerns about the source of the video used to train Sora, the reliability of the output and its ability to protect copyrighted works.

Fox and News Corp also hosted Altman at a leadership retreat last October, where he took part in a question-and-answer session, according to one source with knowledge of the session.
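As a concrete illustration of the API product and the translation use case mentioned in the article, here is a minimal sketch of a customer application calling the service. The translate helper, the gpt-4o model name, and the sample message are my own assumptions, using the official openai Python client:

from openai import OpenAI

client = OpenAI()  # authenticates via the OPENAI_API_KEY environment variable

def translate(text: str, target_language: str = "English") -> str:
    """Translate a customer message, e.g. inside a call-center workflow."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system",
             "content": f"Translate the user's message into {target_language}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

print(translate("Bonjour, je n'arrive pas à accéder à mon compte."))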

 
ChatGPT is going to run into massive lawsuits.

Stack Overflow and YouTube screwed up if they allowed their data to be used for OpenAI's models.
 
Kaspersky (an international cybersecurity company) has warned that companies and consumers must be aware of ‘deepfake’ videos and other social engineering attacks, which will become a serious threat in future.

Hafeez Rehman, technical group manager at Kaspersky, told this scribe that the research has found 'deepfake' creation tools and services available on darknet marketplaces. These services offer generative AI video creation for a variety of purposes, including fraud, blackmail, and stealing confidential data. According to estimates by Kaspersky experts, one minute of 'deepfake' video can be purchased for as little as $300.

He stated that the widespread adoption of Artificial Intelligence (AI) and machine learning technologies in recent years is providing threat actors with sophisticated new tools to perpetrate their attacks. One of these is ‘deepfakes’ which include generated human-like speech or photo and video replicas of people. Kaspersky warned that companies and consumers must be aware that ‘deepfakes’ will likely become more of a concern in the future.

According to the recent Kaspersky Business Digitisation Survey, 51% of employees surveyed in the META region said they could tell a ‘deepfake’ from a real image, however in a test only 25% could actually distinguish a real image from an AI-generated one. This puts organisations at risk given how employees are often the primary targets of phishing and other social engineering attacks.

For example, cyber criminals can create a fake video of a CEO requesting a wire transfer or authorising a payment, which can be used to steal corporate funds. Compromising videos or images of individuals can be created, which can be used to extort money or information from them.

“Despite the technology for creating high-quality ‘deepfakes’ not being widely available yet, one of the most likely use cases that will come from this is to generate voices in real-time to impersonate someone. It’s important to remember that ‘deepfakes’ are a threat not only to businesses, but also to individual users - they spread misinformation, are used for scams, or to impersonate someone without consent – and are a growing cyber threat to be protected from,” Hafeez said.

Kaspersky recommended that people and businesses be aware of the key characteristics of 'deepfake' videos. A solution such as Kaspersky Threat Intelligence can assist in keeping information security specialists up to date on the most recent developments in the 'deepfake' game. Companies should also strengthen the human firewall by ensuring their employees understand what they see, Hafeez added.

SOURCE: https://www.brecorder.com/news/4029...ny-warns-of-serious-threat-of-deepfake-videos
 

Report: Israel used AI to identify bombing targets in Gaza

Lavender, an artificial intelligence tool developed for the war, marked 37,000 Palestinians as suspected Hamas operatives.

The system, called Lavender, was developed in the aftermath of Hamas’ October 7th attacks, the report claims. At its peak, Lavender marked 37,000 Palestinians in Gaza as suspected “Hamas militants” and authorized their assassinations.

Choosing targets

To build the Lavender system, information on known Hamas and Palestinian Islamic Jihad operatives was fed into a dataset — but, according to one source who worked with the data science team that trained Lavender, so was data on people loosely affiliated with Hamas, such as employees of Gaza’s Internal Security Ministry. “I was bothered by the fact that when Lavender was trained, they used the term ‘Hamas operative’ loosely, and included people who were civil defense workers in the training dataset,” the source told +972.
Lavender was trained to identify “features” associated with Hamas operatives, including being in a WhatsApp group with a known militant, changing cellphones every few months, or changing addresses frequently. That data was then used to rank other Palestinians in Gaza on a 1–100 scale based on how similar they were to the known Hamas operatives in the initial dataset. People who reached a certain threshold were then marked as targets for strikes. That threshold was always changing “because it depends on where you set the bar of what a Hamas operative is,” one military source told +972.
The system had a 90 percent accuracy rate, sources said, meaning that about 10 percent of the people identified as Hamas operatives weren’t members of Hamas’ military wing at all. Some of the people Lavender flagged as targets just happened to have names or nicknames identical to those of known Hamas operatives; others were Hamas operatives’ relatives or people who used phones that had once belonged to a Hamas militant. “Mistakes were treated statistically,” a source who used Lavender told +972. “Because of the scope and magnitude, the protocol was that even if you don’t know for sure that the machine is right, you know statistically that it’s fine. So you go for it.”

Collateral damage

Intelligence officers were given wide latitude when it came to civilian casualties, sources told +972. During the first few weeks of the war, officers were allowed to kill up to 15 or 20 civilians for every lower-level Hamas operative targeted by Lavender; for senior Hamas officials, the military authorized “hundreds” of collateral civilian casualties, the report claims.
Suspected Hamas operatives were also targeted in their homes using a system called “Where’s Daddy?” officers told +972. That system put targets generated by Lavender under ongoing surveillance, tracking them until they reached their homes — at which point, they’d be bombed, often alongside their entire families, officers said. At times, however, officers would bomb homes without verifying that the targets were inside, wiping out scores of civilians in the process. “It happened to me many times that we attacked a house, but the person wasn’t even home,” one source told +972. “The result is that you killed a family for no reason.”

AI-driven warfare

Mona Shtaya, a non-resident fellow at the Tahrir Institute for Middle East Policy, told The Verge that the Lavender system is an extension of Israel’s use of surveillance technologies on Palestinians in both the Gaza Strip and the West Bank.
Shtaya, who is based in the West Bank, told The Verge that these tools are particularly troubling in light of reports that Israeli defense startups are hoping to export their battle-tested technology abroad.
Since Israel’s ground offensive in Gaza began, the Israeli military has relied on and developed a host of technologies to identify and target suspected Hamas operatives. In March, The New York Times reported that Israel deployed a mass facial recognition program in the Gaza Strip — creating a database of Palestinians without their knowledge or consent — which the military then used to identify suspected Hamas operatives. In one instance, the facial recognition tool identified Palestinian poet Mosab Abu Toha as a suspected Hamas operative. Abu Toha was detained for two days in an Israeli prison, where he was beaten and interrogated before being returned to Gaza.
Another AI system, called “The Gospel,” was used to mark buildings or structures that Hamas is believed to operate from. According to a +972 and Local Call report from November, The Gospel also contributed to vast numbers of civilian casualties. “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed — that it was a price worth paying in order to hit [another] target,” a military source told the publications at the time.
“We need to look at this as a continuation of the collective punishment policies that have been weaponized against Palestinians for decades now,” Shtaya said. “We need to make sure that war times are not used to justify the mass surveillance and mass killing of people, especially civilians, in places like Gaza.”



This is unreal.
 