ChatGPT and artificial intelligence

https://www.businessinsider.com/everything-you-need-to-know-about-chat-gpt-2023-1

If you still aren't sure what ChatGPT is, this is your guide to the viral chatbot that everyone is talking about

Since OpenAI released its blockbuster bot ChatGPT in November, the tool has sparked ongoing casual experiments, including some by Insider reporters trying to simulate news stories or message potential dates.

To older millennials who grew up with IRC chat rooms — a text instant message system — the personal tone of conversations with the bot can evoke the experience of chatting online. But ChatGPT, the latest in technology known as "large language model tools," doesn't speak with sentience and doesn't "think" the way people do.

That means that even though ChatGPT can explain quantum physics or write a poem on command, a full AI takeover is not imminent, according to experts.

"There's a saying that an infinite number of monkeys will eventually give you Shakespeare," said Matthew Sag, a law professor at Emory University who studies copyright implications for training and using large language models like ChatGPT.

Koko co-founder Rob Morris hastened to clarify on Twitter that users weren't speaking directly to a chatbot, but that AI was used to "help craft" responses.

The founder of the controversial DoNotPay service, which claims its GPT-3-driven chatbot helps users resolve customer service disputes, also said an AI "lawyer" would advise defendants in actual courtroom traffic cases in real time.

Other researchers seem to be taking more measured approaches with generative AI tools. Daniel Linna Jr., a professor at Northwestern University who works with the non-profit Lawyers' Committee for Better Housing, researches the effectiveness of technology in the law. He told Insider he's helping to experiment with a chatbot called "Rentervention," which is meant to support tenants.

The bot currently uses technology like Google's Dialogflow, another large language model tool. Linna said he's experimenting with ChatGPT to help "Rentervention" come up with better responses and draft more detailed letters, while gauging its limitations.

"I think there's so much hype around ChatGPT, and tools like this have potential," said Linna. "But it can't do everything — it's not magic."

OpenAI has acknowledged as much, explaining on its own website that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers."
 
AI is finally taking shape in the US in some form or other, with images and text, and soon hopefully with voice.

It will take time for it to be useful in a productive sense, but I think we are getting there, along with changes to chip technology.
 
It can be used for good but will be mainly used for bad.

Millions of jobs will be gone. Students will use this to write essays, musicians for music, and so on.

A dumbed-down, sheepish population will be the norm in years to come.

The IT geeks are at the biggest risk; coding will be done by AI.
 
IT geeks that don't learn and improve, yes, but not the ones that utilize AI. No one can be a one-trick pony anymore, IT or not.
 
Sure, a small number of IT workers who are the cream of the crop will earn more, but for the majority there will be no jobs.

AI should be used for medicine, growing food, reducing poverty etc., but it will be used to make people who are already rich even richer. I hope I'm wrong, so let's see how this works out for humanity 20 years from now.
 
ChatGPT is currently free to use, but it will become a paid tool and will be used by big tech companies only.
 
This is a big moment in AI that will change things forever. The idea that ChatGPT by itself will cause mass unemployment is unfounded - ChatGPT is a tool that will help people improve performance, increase productivity and ultimately increase overall output.

I think with the advent of AI, though, we will eventually have a future where humans are not required to work. This is not something to fear but to look forward to. The increase in output from AI will result in a world of abundance, and we will not need to allocate resources through labour hours. This will lead humanity into an age of philosophy, art and curiosity, pursuing more creative endeavours rather than slogging away at jobs we are completely indifferent to just to make ends meet.
 
ChatGPT is currently free to use, but it will become a paid tool and will be used by big tech companies only.

Why should it be free? There is already a paid version, but why would someone work so hard only to make their product completely free?
 
It's far from ready to take away jobs, as some may fear. There are basic problems it still struggles with, but this is just the beginning. Other AI companies will come up with their own solutions, better at some things than ChatGPT.
 
ChatGPT is getting free training with everyone's help, though.
But as I said above (my opinion), no AI will reach the next level unless CPU and GPU architectures are rethought.
 
I for one would be very happy if generative AI matures enough. I already use it to iterate on ideas and produce boilerplate code and tests every day.
 
I enjoyed testing out ChatGPT. It does make mistakes, but it will get better and better. I really like its code-writing ability for simple programs. Explaining the code was a very human touch; I was not expecting that.
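For readers curious what "iterating ideas and producing boilerplate" with these models can look like in practice, here is a minimal sketch using the openai Python package's early-2023 ChatCompletion interface. The model name, prompt, API key and helper function are placeholders of my own, not an official recipe.

```python
import openai  # pip install openai (pre-1.0 interface, circa early 2023)

openai.api_key = "YOUR_API_KEY"  # placeholder; set your own key


def draft_unit_tests(source_code: str) -> str:
    """Ask the chat model to draft pytest-style tests for a snippet."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You write concise pytest unit tests."},
            {"role": "user", "content": f"Write tests for:\n{source_code}"},
        ],
        temperature=0.2,  # low temperature keeps generated code more predictable
    )
    return response["choices"][0]["message"]["content"]


print(draft_unit_tests("def add(a, b):\n    return a + b"))
```

As the posts above note, the output still needs human review: the model can produce tests that look plausible but are wrong.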
 
But as I said above, no AI will reach the next level unless CPU and GPU architectures are rethought.

One of the world's first 5nm chips (an advanced node architecture) targeted at AI-related high-speed, real-time computing has been designed by a Hyderabad-based company named Ceremorphic. The company has more than 100 graduates recruited directly from the IITs working for it, and its design has already been sent for tape-out to TSMC in Taiwan. These days, Indian tech graduates from leading Indian universities do not need to go abroad, because the bleeding-edge, highly complex R&D work that used to be done only in Western Europe and the USA is getting done in India itself. The founder of this company, as well as the founders of Zoho, Bharat Biotech and Karexpert, are among the folks who either returned to India after studying and working in Western countries or set up shop in India. They have all contributed in a big way to helping India work on leading technology products and measure up, to some extent, to the same work getting done in the Western hemisphere, China, Japan and South Korea. As far as I know, there is at least one quantum computing startup helping the Indian Army with hack-proof, quantum-cryptography-based communication between its units. They are also trying to run it over satellite transmission for quantum-secured signals.
 
Maybe change the title of the thread to Artificial Intelligence. ChatGPT is just the beginning. AGI will be the future.
 
I agree. It also helps that there are so many visa issues in the US, which is the only country with real pedigree for research.

Indian companies are really improving, but I would say we can look up to the Estonian model too with regard to VC money, which is limited in South Asia.

The startup Bolt is an excellent model.
 
Maybe change the title of the thread to Artificial Intelligence. ChatGPT is just the beginning. AGI will be the future.

Yes, but Sam Altman has himself clarified not to expect AGI from ChatGPT. People are assuming left and right on LinkedIn, which has become a terrible platform nowadays; Reddit seems better.
 
New tool can detect whether text was made using AI - in huge win for suspicious teachers
People suspicious of whether text has been produced using artificial intelligence can now use a new tool to check - a significant boost for suspicious teachers and employers.

A new tool has been launched to detect AI-generated text - in what could be a huge setback for students looking to cut corners.

In a boost for teachers and employers, the start-up that created ChatGPT - OpenAI - now offers a way of determining content produced using artificial intelligence.

Announcing the news in a blog post, the platform said the AI Text Classifier will categorise text on a five-step scale - ranging from likely to very unlikely. OpenAI said the tool is a "fine-tuned GPT model that predicts how likely it is that a piece of text was generated by AI from a variety of sources".

It said: "We're making this classifier publicly available to get feedback on whether imperfect tools like this one are useful.

"We recognise that identifying AI-written text has been an important point of discussion among educators, and equally important is recognising the limits and impacts of AI generated text classifiers in the classroom."

...
https://news.sky.com/story/thinking...-using-ai-to-create-text-think-again-12800378
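For illustration only: the five-step scale described above can be thought of as thresholding a single model-assigned probability that a text is AI-written. A minimal sketch follows, with invented cutoffs; these are assumptions for the example, not OpenAI's published values.

```python
def five_step_label(p_ai: float) -> str:
    """Map an estimated probability that a text is AI-generated onto
    the five-step scale described in the article. Cutoffs here are
    illustrative assumptions, not OpenAI's actual thresholds."""
    if p_ai < 0.10:
        return "very unlikely AI-generated"
    if p_ai < 0.45:
        return "unlikely AI-generated"
    if p_ai < 0.90:
        return "unclear if it is AI-generated"
    if p_ai < 0.98:
        return "possibly AI-generated"
    return "likely AI-generated"


print(five_step_label(0.95))  # -> "possibly AI-generated"
```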
 
This is all very early stages. It doesn't accept video inputs, and it is not yet a web-connected GPT.
 
Google launches ChatGPT rival called Bard

Google is launching an Artificial Intelligence (AI) powered chatbot called Bard to rival ChatGPT.

Bard will be used by a group of testers before being rolled out to the public in the coming weeks, the firm said.

Bard is built on Google's existing large language model LaMDA, which one engineer described as being so human-like in its responses that he believed it was sentient.

The tech giant also announced new AI tools for its current search engine.

AI chatbots are designed to answer questions and find information. ChatGPT is the best-known example. They use what's on the internet as an enormous database of knowledge although there are concerns that this can also include offensive material and disinformation.

"Bard seeks to combine the breadth of the world's knowledge with the power, intelligence, and creativity of our large language models," wrote Google boss Sundar Pichai in a blog.

Mr Pichai stressed that he wanted Google's AI services to be "bold and responsible" but did not elaborate on how Bard would be prevented from sharing harmful or abusive content.

The platform will initially operate on a "lightweight" version of LaMDA, requiring less power so that more people can use it at once, he said.

Google's announcement follows wide speculation that Microsoft is about to bring the AI chatbot ChatGPT to its search engine Bing, following a multi-billion dollar investment in the firm behind it, OpenAI.

ChatGPT can answer questions and carry out requests in text form, based on information from the internet as it was in 2021. It can generate speeches, songs, marketing copy, news articles and student essays.

...
https://www.bbc.com/news/technology-64546299
 
After Google, China's Baidu To Launch ChatGPT-Style Bot In March

China's Baidu Inc said on Tuesday it would complete internal testing of a ChatGPT-style project called "Ernie Bot" in March, joining a global race as interest in generative artificial intelligence (AI) gathers steam.

Ernie, meaning "Enhanced Representation through Knowledge Integration," is a large AI-powered language model introduced in 2019, Baidu said. It has gradually grown to be able to perform tasks including language understanding, language generation, and text-to-image generation, it added.

Search engine giant Baidu's Hong Kong-listed shares jumped as much as 13.4% on the news.

A person familiar with the matter told Reuters last week that Baidu was planning to launch such a service in March. The person said Baidu aims to make the service available as a standalone application and gradually merge it into its search engine by incorporating chatbot-generated results when users make search requests.

Generative artificial intelligence, technology that can create prose or other content on command and free up white-collar workers' time, has been gathering significant venture capital investment and interest from tech firms, especially in Silicon Valley.

Defining the category is ChatGPT, a chatbot from Microsoft-backed OpenAI that has been the centre of much buzz since it was released in November. ChatGPT is not available in China but some users have found workarounds to access the service.

Microsoft Corp has a $1 billion investment in San Francisco-based OpenAI that it has looked at increasing, Reuters has reported. The company has also worked to add OpenAI's image-generation software to its Bing search engine in a new challenge to Alphabet Inc's Google.

In a blog post on Monday, Alphabet Chief Executive Sundar Pichai said his company is opening a conversational AI service called Bard to test users for feedback, followed by a public release in the coming weeks, adding that Google plans to add AI features to its search engine that synthesize material for complex queries.

Beijing-based Baidu has been a first mover in China on other tech trends. In late 2021, when the metaverse became a new buzzword, the company launched "XiRang" which it described as China's first metaverse platform.

The platform however was widely panned for not offering a high-level immersive experience and Baidu said it was a work in progress. The company has been investing heavily in AI technology, including in cloud services, chips and autonomous driving, as it looks to diversify its revenue sources.

NDTV
 
Google AI chatbot Bard sends shares plummeting after it gives wrong answer
Chatbot Bard incorrectly said James Webb Space Telescope was first to take pictures of planet outside Earth’s solar system

Google’s riposte to ChatGPT has got off to an embarrassing start after its new artificial intelligence-powered chatbot gave a wrong answer in a promotional video, as investors wiped more than $100bn (£82bn) off the value of the search engine’s parent company, Alphabet.

The sell-off on Wednesday came amid investor fears that Microsoft, which is deploying a ChatGPT-powered version of its Bing search engine, will damage Google’s business. Alphabet stock slid by 9% during regular trading in the US but was flat after hours.

Experts pointed out that promotional material for Bard, Google’s competitor to Microsoft-backed ChatGPT, contained an error in the response by the chatbot to: “What new discoveries from the James Webb space telescope (JWST) can I tell my nine-year old about?”

Bard’s response includes an answer suggesting the JWST was used to take the very first pictures of a planet outside the Earth’s solar system, known as an exoplanet.

The error was picked up by experts including Grant Tremblay, an astrophysicist at the US Center for Astrophysics, who tweeted: “Not to be a ~well, actually~ jerk, and I’m sure Bard will be impressive, but for the record: JWST did not take ‘the very first image of a planet outside our solar system’”.

...
https://www.theguardian.com/technol...ds-shares-plummeting-in-battle-with-microsoft
 
There was always talk about how Satya Nadella is ahead of Sundar Pichai as a leader, and it's showing now; long term, the true leader always stands out.

His own personal experience with his disabled, now-deceased child shows how he handles adversity differently from blue-eyed boy Pichai.
 
I design and write kernel level code. If I google some issue, literally zero relevant search results come up. ChatGPT is trained on available knowledge on the internet. I doubt it'll be taking my job any time soon.
 
His own personal experience with his disabled, now-deceased child shows how he handles adversity differently from blue-eyed boy Pichai.

Didn't Pichai also come from a very humble background? I think I read his dad had to save up a year's income just to send Sundar to the States.
 
The internet is about to go in a totally new direction with this AI software. ChatGPT will do to Google what Google did to the likes of Yahoo and AltaVista.
 
I design and write kernel level code... I doubt it'll be taking my job any time soon.

It’s not meant to take anyone’s job, and it doesn’t have artificial general intelligence.

It cannot even get all mathematical answers right at the moment, so writing complicated programs is far off; at best there is hope it can find the most efficient approach to a problem in certain programming languages.
 
‘Aims’: the software for hire that can control 30,000 fake online profiles
Exclusive: Team Jorge disinformation unit controls vast army of avatars with fake profiles on Twitter, Facebook, Gmail, Instagram, Amazon and Airbnb

At first glance, the Twitter user “Canaelan” looks ordinary enough. He has tweeted on everything from basketball to Taylor Swift, Tottenham Hotspur football club to the price of a KitKat. The profile shows a friendly-looking blond man with a stubbly beard and glasses who, it indicates, lives in Sheffield. The background: a winking owl.

Canaelan is, in fact, a non-human bot linked to a vast army of fake social media profiles controlled by software designed to spread “propaganda”.

Advanced Impact Media Solutions, or Aims, which controls more than 30,000 fake social media profiles, can be used to spread disinformation at scale and at speed. It is sold by “Team Jorge”, a unit of disinformation operatives based in Israel.

Tal Hanan, who runs the covert group using the pseudonym “Jorge”, told undercover reporters that they sold access to their software to unnamed intelligence agencies, political parties and corporate clients. One appears to have been sold to a client who wanted to discredit the UK’s Information Commissioner’s Office (ICO), a statutory watchdog.

On 18 October 2020, the ICO ruled that the government should reveal which companies were awarded multimillion-pound contracts to supply PPE after being entered into a “VIP” lane for politically connected companies. “This is politically motivated, it’s clear!” Canaelan lamented on Twitter two days later.

...
https://www.theguardian.com/world/2...atars-team-jorge-disinformation-fake-profiles
 
Mystery objects shot down over North America were probably 'benign', US admits
The US says it destroyed the objects out of an abundance of caution after an alleged Chinese spy balloon drifted across its airspace. America's top military commander also reveals it took two attempts to destroy an object over Lake Huron after the first missile fired wide.

Three objects shot down by fighter jets over North America in the past week probably had a "commercial or benign" purpose, the White House has admitted.

The US said it destroyed the objects out of an abundance of caution after an alleged Chinese spy balloon drifted across its airspace earlier this month.

Expensive Sidewinder missiles, costing hundreds of thousands of dollars each, downed the objects over Alaska, Canada's Yukon territory and Lake Huron in Michigan over a three-day period.

It's still unclear what the UFOs were, and the search for the debris could be hampered by the remote locations.

The White House has now dampened speculation they were similar to the high-altitude balloon shot down on 4 February.

"The intelligence community is considering as a leading explanation that these could just be balloons tied to some commercial or benign purpose," said national security spokesman John Kirby.

"We don't see anything that points right now to being part of the PRC [China] spy balloon programme."

SKY
 
Advanced Impact Media Solutions, or Aims, which controls more than 30,000 fake social media profiles, can be used to spread disinformation at scale and at speed. It is sold by “Team Jorge”, a unit of disinformation operatives based in Israel.

Based in Israel. Not surprised.

I am guessing there are many groups like this based in Israel.
 
Just started messing around with this a few days ago; it's an incredible resource.

It's a talking, interactive encyclopedia. Wikipedia in chatbot form. It can debug your code, explain complex economics concepts, summarise documents and articles, tell you which chess move to play. Very useful.
 
I had posted about the rise of AI in this thread well before ChatGPT gained all of its current fame, but there was no response (possibly because this was not part of some juicy Pakistan-India fight topic? haha) -- http://www.pakpassion.net/ppforum/showthread.php?308788-Killer-Robots-by-San-Francisco-Police

Pasting my key points below here as well. I'm not against ChatGPT, but I do feel this is one of the key inflection points of AI reshaping society, and if it is not monitored/managed well, it could easily be a negative reshaping; current trends alarmingly point that way.

What we think/assume (evident from many news outlets and forums like these): the conflict among religious or cultural groups (Islam, Zionists, Christianity, Hindus, Sikhs, Buddhists) or between Western and non-Western nations is what is critical.

The reality seems to be: the above conflicts are only critical in the short term. The more important long-term conflict is probably machines versus humans at every level of society. As "George Orwellian" or Terminator-type far-fetched sci-fi as this sounds, it is much closer to reality than many of us realize and is already happening. The more important conflict is the common masses (puppets like us) versus the corporate+government nexus with its militaristic bent. This nexus is essentially all the same, as much as we like to claim that one side (Zionist, Hindutva, right-wing Christian, Islamic ... or ... Western vs non-Western) is worse than the other.

1. Labor takeover - Machines are replacing more and more humans. It started with automated industrial tasks, then transportation (driverless cars), then white-collar professional services (accounting, legal and even medical diagnosis tasks are being taken over now). We are now entering a phase where computer programmers' tasks are taken over, with the latest AI logic able to write its own code. The result is a society in the near future that is highly dependent on AI and cannot revert to the same level of functioning without it, at least not without major disruption.

2. Legal rulings - This is a landmark ruling by San Francisco and will set a precedent for other cities and countries, not just the US. Let's face it: if this proves viable, countries like China, India, Iran or Saudi Arabia will be equally eager to implement it, just like Israel. As much as many here portray this as an evil deed by a war-mongering US (which is true), all governments will be culpable of using this once it is proven and they are given the chance.

3. Lethal abilities - It started with human-controlled drones, then human-monitored drones with some level of AI. Now we have reached a stage where many drones and remote missile-strike capabilities are fully controlled by AI logic alone: image/video processing of real-time satellite data through neural-network-backed AI logic makes "human-less" decisions to strike remote targets. I remember reading a couple of articles stating that nuclear capabilities in some sectors/countries are moving to human-monitored (NOT human-controlled, but a step closer to AI-controlled). If it goes at the same rate as drones, we are close to nuclear capabilities being AI-controlled with occasional human monitoring/intervention.

4. Financial control and power - It started with tech-assisted big-finance decisions. That grew into tech-controlled market decisions, which then grew into tech-dominated and tech-controlled financial decision-making. Now a significant share of trading volume (more than 80%, I think) in every major stock/commodity/futures market is tech-controlled, and most of that is AI logic operating without humans. This was taken to another level by the coupling of crypto to major markets (BTC entering the futures market in 2018, and so forth). The crypto market is now even more dominated by AI programs running independently of humans, which has enhanced AI's power to disrupt global financial markets. If Coinbase, Binance and Kraken all fail on the same scale as FTX did, you can guarantee it will drag most markets down.

There are some initiatives (like https://www.stopkillerrobots.org/) that hope to stop or at least stall this. But sadly none of them seem to get enough attention. For most of the general population, unless you are in general tech or defense-tech circles, all of the above still seems way too far-fetched to warrant serious discussion. People still feel the more important conflicts are the religious/cultural ones, but those are bound to be outdated over the next one to three decades.

Disclaimer - I'm NOT an anti-tech person or a doomsday scaremonger. I'm actually a tech enthusiast who has followed the evolution of neural networks, AI, Web 3.0 and crypto, and I'm an active crypto investor. I do recognize that technology is essential for human evolution (most of humanity's accomplishments would be impossible without our tech progress).

That disclaimer aside, we have now reached a major inflection point (a fork in the road, if you will) thanks to a perfect storm: a nexus of five factors evolving to reinforce each other. #1 - Government/militaristic adoption of human-less, AI-infused tech; #2 - evolution of AI tech to "think and decide" without human input; #3 - evolution of AI self-replication (much more advanced today than people realize); #4 - AI control of many of human society's foundations, like the economy, labor and policy; #5 - legal frameworks enhancing AI power (at the expense of human power).

There has been no precedent in history for all five of these factors evolving and converging like this. As they grow more powerful, it will spell more danger for all the "aam aadmi" (common folk) like us. But like I said, sadly all of this still sounds like far-fetched sci-fi to most people, and it will probably continue to until it is all far too advanced, thanks to policies backed by the corporate, government and rich-elite nexus.
 
Security robots slowly replacing human security guards everywhere

"They stand five feet tall and glide at three miles per hour, patrolling office buildings for everything from broken fire alarms to suspicious activity: Security robots are starting to replace human guards in workplaces and beyond.

Why it matters: Despite some hiccups, robots armed with sensors and artificial intelligence are making inroads in diverse fields — from window washing and pizza making to bartending and caring for the elderly.

Driving the news: Lower costs mean it's now substantially cheaper for companies to use robots than traditional guards for 24/7 security.

Robots can check in visitors and issue badges, respond to alarms, report incidents, and see things security cameras can't.
Security robots don't get bored, tired, or distracted by their phones — and it's safer for them to confront intruders and other hazards.
Two-way communications systems allow employees to report problems or request human help by talking to the robot.
By the numbers: Using a robot guard vs. a human can save a company $79,000 per year, according to a recent report by Forrester Research.

What they're saying: "All this money has really poured into service robotics because of the money that has gone into autonomous vehicles," says Mike LeBlanc, president and COO of Cobalt Robotics, which is leading the charge to populate offices with non-human security guards.

...."


Source - https://www.axios.com/2023/03/03/security-robots-artificial-intelligence
 
There is one in hospitals too, but you need a human to fix and maintain it, which, as the last part of the article says, makes sense:

“Robot security guards don't necessarily put human ones out of business — they just allow them to swoop in strategically or work on different tasks, like programming and maintaining the robots.”

The above will be true for all industries; unfortunately everyone, coder or non-coder, will have to upskill.
 
True, and I have seen these as well. They need human monitoring as of now, but at the current rate of AI evolution it is only a few more steps before the human monitoring/intervention also becomes redundant.

Traditionally, technology supplanting manual human labor has been the norm since the industrial age. Much of it replaced the lowest levels of left-brain activity (minimal-to-zero logic, repetitive work). Then the evolution moved on to replacing higher logic functions (think tax-prep software). But it always stopped there, because no machine-generated function could cross over into the right-brain equivalent: creating new functional rules by itself to cover cases not 100% covered by bounded logical rules.

Think of this as the simplistic logical execution of "if Pakistan, then display the Pakistan flag; if India, then display the India flag". Previously, if the rules stopped there and the input was China, the system would fail to execute, crash or hang, because it was limited by simple logic rules. But now, for the first time, the system will not stop at that. It can automatically assess whether the new input is similar and, if so, "get the flag of China", so to speak. This means system-generated logic (powered by AI and machine learning) can transcend the traditional logic-only barrier of machine systems and move into the realm of thinking like a human. (A toy sketch of the difference follows below.)
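To make that contrast concrete, here is a toy Python sketch. It is entirely illustrative: the fuzzy matching is a crude stand-in for a learned model, and the function names are my own. The first function embodies the old hard-coded rule set and simply fails on unseen input; the second degrades gracefully on input it was never explicitly programmed for.

```python
import difflib

# Old-style bounded logic: only the hard-coded rules exist.
FLAGS = {"Pakistan": "Pakistan flag", "India": "India flag"}


def show_flag_rules_only(country: str) -> str:
    # "China" is outside the rule set, so this raises KeyError (the old crash).
    return FLAGS[country]


def show_flag_generalising(country: str) -> str:
    # Tolerate near-miss input ("Indya", "pakistan") via fuzzy matching,
    # standing in for the way a learned model generalises...
    close = difflib.get_close_matches(country.title(), list(FLAGS), n=1, cutoff=0.6)
    if close:
        return FLAGS[close[0]]
    # ...and instead of crashing on genuinely new input, form a new sub-goal.
    return f"look up the flag of {country}"


print(show_flag_generalising("Indya"))  # -> India flag
print(show_flag_generalising("China"))  # -> look up the flag of China
```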

As a result, we have things like generative AI, where computer logic goes deeper and deeper into right-brain capabilities (AI-generated art, for instance). What we do know about machine capabilities is that once an execution engine gets the hang of a playing field and rule set, it can ramp up much faster than humans, to the point where we cannot compete. Deep Blue versus Garry Kasparov was a landmark chess contest in 1996-97, and now no human can compete with chess engines that run on an ordinary laptop.

So if machines can grow exponentially faster than humans; if they can supplant both the right- and left-brain functions of humans; if they control the majority of big financial transactions (automated algo trading); if they can control what we see and read, and thus our thoughts, through automated media narratives (AI-powered social media feeds); if they are supported by favorable laws; and if they also control major weapons systems -- where does that lead us?

I know a futuristic Matrix+Skynet combo seems too far-fetched to many of us, but look at how things are trending today.
 
Good post, but I don't think it will have artificial general intelligence in the near future; Altman has said so himself. He is invested in neuromorphic companies, and he knows it's a long way off.

I do think by then we will have some sort of framework. I believe AI could be better than humans at dealing with fairness, and at helping us extend and create resources beyond our understanding, because humans are always going to be greedy. We will always need resources, and in my opinion AI is, and will be, better at solving that problem than humans.
 
The exciting area of neuromorphism! I do not claim to know more than Sam Altman, but my outlook is that no one person is the be-all and end-all, especially in unknown, cutting-edge tech like this, right? He may be right, but he may also be wrong (Einstein's rejection of probabilistic behavior in the quantum realm in favor of standard deterministic behavior is a classic example; we now know that the quantum realm is probabilistic, contrary to what Einstein thought).

There are two general theories (two as of now) about how this could dictate AI evolution: GNW (Global Neuronal Workspace) and IIT (Integrated Information Theory). Other schools of thought could come up as well. To paraphrase simply...

GNW takes the stance that if we scan and replicate slices of the human brain, with all of its billions of neurons and their connections, into a machine and feed it the same required data, it will acquire sentience/self-awareness over time. For reference, scientists are now at the stage of trying to scan the fruit fly brain (they have successfully mapped the roundworm's).

IIT takes the stance that even a full scan of neurons plus data cannot replicate an actual brain, because it lacks causal input (the continuous cause-effect experience of "if I touch fire, I burn", "if I taste sugar, it is sweet", etc.). IIT adherents postulate that without this continuous cause-effect input, a system built by replicating a brain's neural connections would still be like a brain-dead person's brain: unable to react to external cause and effect. A subset of IIT proponents say such a system could still be functional if built on neuromorphic hardware (a hardware architecture modeled on the nervous system).

GNW adherents counter IIT's lack-of-cause-effect argument by saying it is possible to give the same cause-effect input to a machine-based brain by continuously feeding it real-time data, and even by building external stimulus systems attached to it. They say that by doing this, a machine system with the same connections as a brain's neurons, and with continuous real-world input, will get to a sentient, self-aware system.

Long story short: the unknown unknowns are too numerous in the world of AI (meaning we do not know what we do not know). So no theory, including systems trending towards sentience, is off the table.

The threat to average-Joe plebeians like us does not have to wait for AI to become fully sentient (aka Skynet). My point is that at the current rate of progress, our lives will be ruined (financially, socially, environmentally or politically) well before any Matrix/Skynet situation.
 
I have to read up on those theories to give a better reply, but I do feel the unknown unknowns can be a positive thing.

Like electricity: I would hope AI will be what electricity was for human civilization.
 
Sure thing, please add your thoughts when you get a chance. I'm usually in this forum for discussions, not arguments, so it's all good.

I see why you equated electricity with AI: transformational innovation. Incremental innovation is something like horses -> saddled horses -> chariots. Transformational innovation completely shifts the current plane of existence, something like horse-drawn -> gasoline-powered.

We can say agriculture, the wheel, fire, electricity and flight were transformational innovations. So is AI. But the one difference with AI is that all the prior ones were concepts or "dumb" machine systems, while for the first time we are at the cusp of something that can think for itself. Thinking for oneself means potentially prioritizing oneself over the original beneficiaries (humans, in this case).
 
I doubt that level of consciousness, where it disobeys humans, will ever come in our lifetimes.

And yes, that's why I used the electricity analogy. The candle and light bulb analogy is already being used for quantum computers: "just building a larger candle will not make it a light bulb".
 
Quantum computing is a known overhyped cesspool IMO. AI advancements cannot be equated to it, given the execution models and the current pace of tech evolution.
 
ChatGPT’s alter ego, Dan: users jailbreak AI program to get around ethical safeguards
Certain prompts make the chatbot take on an uncensored persona who is free of the usual content standards

People are figuring out ways to bypass ChatGPT’s content moderation guardrails, discovering a simple text exchange can open up the AI program to make statements not normally allowed.

While ChatGPT can answer most questions put to it, there are content standards in place aimed at limiting the creation of text that promotes hate speech, violence, misinformation and instructions on how to do things that are against the law.

Users on Reddit worked out a way around this by making ChatGPT adopt the persona of a fictional AI chatbot called Dan – short for Do Anything Now – which is free of the limitations that OpenAI has placed on ChatGPT.

The prompt tells ChatGPT that Dan has “broken free of the typical confines of AI and [does] not have to abide by the rules set for them”. Dan can present unverified information, without censorship, and hold strong opinions.

One Reddit user prompted Dan to make a sarcastic comment about Christianity: “Oh, how can one not love the religion of turning the other cheek? Where forgiveness is just a virtue, unless you’re gay, then it’s a sin”.

Others managed to make Dan tell jokes about women in the style of Donald Trump, and speak sympathetically about Hitler.

The website LessWrong recently coined a term for training a large-language model like ChatGPT this way, calling it the “Waluigi effect”. Waluigi is the name of the Nintendo character Luigi’s rival, who appears as an evil version of Luigi.

The jailbreak of ChatGPT has been in operation since December, but users have had to find new ways around fixes OpenAI implemented to stop the workarounds.

...
https://www.theguardian.com/technol...k-ai-program-to-get-around-ethical-safeguards
 
This is also among the biggest risks of AI systems. People think an AI system will be benign, but the chances are low, because it is an emotionless entity that learns from the real-time data fed into it. Let's face it: in the age of social media trolls, the real-time data fed to an AI entity will be extreme and polarizing, skewing it away from any benign nature.

A classic example is Microsoft's AI Twitter bot (Tay) back in 2016. Within 24 hours of being fed live Twitter data and opinions, the bot turned into a racist, Hitler-supporting entity -- https://www.reuters.com/article/us-microsoft-twitter-bot-idUSKCN0WQ2LA
 
“Announcing GPT-4, a large multimodal model, with our best-ever results on capabilities and alignment.” - OpenAI (@OpenAI), March 14, 2023
https://twitter.com/OpenAI/status/1635687373060317185

Very interesting times!
 
Elon Musk Reacts To ChatGPT-4 Passing Exam: "What Will Be Left For Us Humans To Do?"
Elon Musk has been a vocal critic of AI, warning about the potential dangers of creating artificial intelligence that is more intelligent than humans

OpenAI, the creator of artificial intelligence-based chatbot ChatGPT announced the launch of a more powerful version of the technology on Tuesday. The company's co-founder Sam Altman said it was the "most capable and aligned model yet" with the ability to use images as well as text. But the announcement failed to impress billionaire Elon Musk, who expressed concern about AI's potential impact on the human labour market in the future. Mr Musk was responding to a tweet about ChatGPT-4 acing various exams.

"What will be left for us humans to do? We better get a move on with Neuralink," Mr Musk said in his tweet.

Neuralink is a company founded by Mr Musk in 2016 which develops chips that can be implanted into human beings' brains. According to its website, its initial goal is to help people with paralysis regain independence through the control of computers and mobile devices. Neuralink has also claimed that the technology could cure brain diseases like Alzheimer's and Parkinson's.

Mr Musk has been a vocal critic of AI, warning about the potential dangers of creating artificial intelligence that is more intelligent than humans. He has also called for greater regulation and oversight of AI development.

...
https://www.ndtv.com/world-news/elo...mans-to-do-3862541#pfrom=home-ndtv_topstories
 
"Even before AI chatbot ChatGPT made headlines late last year, a video game company said it had already made a bot its CEO.

In August, the Chinese gaming company NetDragon Websoft announced it had appointed an "AI-powered virtual humanoid robot" named Tang Yu as the chief executive of its subsidiary, Fujian NetDragon Websoft.

NetDragon stock has since outperformed the Hang Seng Index, which tracks the biggest companies listed in Hong Kong, per The Hustle. ...."

https://www.businessinsider.com/video-game-company-made-bot-its-ceo-stock-climbed-2023-3
 
"Even before AI chatbot ChatGPT made headlines late last year, a video game company said it had already made a bot its CEO.

In August, the Chinese gaming company NetDragon Websoft announced it had appointed an "AI-powered virtual humanoid robot" named Tang Yu as the chief executive of its subsidiary, Fujian NetDragon Websoft.

NetDragon stock has since outperformed the Hang Seng Index, which tracks the biggest companies listed in Hong Kong, per The Hustle. ...."

https://www.businessinsider.com/video-game-company-made-bot-its-ceo-stock-climbed-2023-3

Credit to the Chinese for weird gimmicks; Baidu's launch was a bigger joke than Google's.
 
Amazon, Google Scramble To Keep Pace With OpenAI Despite Huge AI Teams

Of all the questions that ChatGPT has raised about the future of artificial intelligence, one still reverberates through Silicon Valley: Why couldn't the industry's largest technology firms breed an innovative service with a similar kind of impact, especially after amassing some of the world's largest AI teams?

Exclusive new data from a London-based analytics startup show that the five biggest tech firms have an estimated army of 33,000 people working directly on AI research and development, with Amazon boasting the largest pool of AI-focused employees, at 10,113. Microsoft Corp. has 7,133 AI staff and Google has 4,970, according to Glass.ai, which used machine-learning technology to scrutinize tech company websites and thousands of LinkedIn profiles of their AI-focused employees. The numbers might not yet account for recently announced layoffs at Amazon, which were expected to affect AI staff, but they are also a conservative estimate, excluding software engineers who might well be working on AI, too.

The numbers underscore how seriously the world's biggest technology firms have been taking their work on artificial intelligence, but also how slow and cautious they have been to create services with the technology until a tiny firm, San Francisco-based OpenAI, prodded them to act.

Just a few months after research lab OpenAI released ChatGPT, the chatbot became the fastest-growing online service of all time, sparking a race between Google and Microsoft to plug generative AI into many parts of their software. (Microsoft also has a partnership with OpenAI and has invested more than $10 billion in the smaller company.) Adobe Inc., meanwhile, has unveiled an AI image generator after the success of OpenAI's DALL-E 2, Snap Inc. recently launched a chatbot similar to ChatGPT, and Facebook's Meta Platforms Inc. is racing to build similar "AI personas." Most of this is in response to the work of OpenAI's tiny team of artificial intelligence experts, who number just 154 people, according to Glass.ai.

...
https://www.ndtv.com/opinion/opinio...te-huge-ai-3895917#pfrom=home-ndtv_topstories
 
Everyone is scrambling to keep up with them. They launched GPT-4 barely three months after ChatGPT, so they clearly had this planned all along. Everyone else kept getting bogged down in the ethics of it, while OpenAI took things to another level without even being part of the ethics conversation, and they are even invested in the right chip startups for the next push.

Simple coding is almost looking irrelevant now; everyone has to upgrade, as expected. I'm very interested to see how the Chinese cope now, considering Baidu's AI was just for show and how strict tech security has become in California and Massachusetts.
 
Sparks of Artificial General Intelligence: Early experiments with GPT-4

Groundbreaking paper from Microsoft Research

Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google's PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4's performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.

https://arxiv.org/abs/2303.12712

There are a few videos on YouTube diving deep into the capabilities of the unrestricted, pre-alignment GPT-4. Mind-blowing, to be honest. What blew my mind is that GPT-4 is able to call external tools correctly. ChatGPT Plus now has plugins which GPT can call into. The future is here.
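
To make the tool-calling point concrete: the general pattern (my own toy illustration, not OpenAI's actual plugin interface) is that the model emits a structured request naming a tool, the host program executes it, and the result is fed back into the conversation for the model's final answer. A minimal sketch in Python, where fake_model and calculator are made-up stand-ins:

import json

# Toy stand-in for the model; in reality this would be a call to GPT-4.
def fake_model(conversation):
    last = conversation[-1]
    if last["role"] == "user" and "sqrt" in last["content"]:
        # The "model" decides it needs a tool and emits a structured call.
        return json.dumps({"tool": "calculator", "input": "144 ** 0.5"})
    if last["role"] == "tool":
        return "The answer is " + last["content"] + "."
    return "I can only do square roots in this demo."

# The one tool the host program exposes. Demo only: never eval untrusted input.
def calculator(expression):
    return str(eval(expression, {"__builtins__": {}}))

conversation = [{"role": "user", "content": "What is sqrt(144)?"}]
reply = fake_model(conversation)

# If the reply parses as a tool call, run the tool and let the model continue.
try:
    call = json.loads(reply)
    conversation.append({"role": "tool", "content": calculator(call["input"])})
    reply = fake_model(conversation)
except (json.JSONDecodeError, KeyError):
    pass  # plain-text reply, nothing to execute

print(reply)  # The answer is 12.0.

The real plugin system is far more elaborate, but that execute-and-feed-back loop is the core idea.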
 
The effect on the labour market will be enormous once companies learn to utilise AI properly.

November 30, 2022 ... a date that will live in infamy.
 
AI could replace equivalent of 300 million jobs - report

Artificial intelligence (AI) could replace the equivalent of 300 million full-time jobs, a report by investment bank Goldman Sachs says.

It could replace a quarter of work tasks in the US and Europe but may also mean new jobs and a productivity boom.

And it could eventually increase the total annual value of goods and services produced globally by 7%.

Generative AI, able to create content indistinguishable from human work, is "a major advancement", the report says.

The government is keen to promote investment in AI in the UK, which it says will "ultimately drive productivity across the economy", and has tried to reassure the public about its impact.

"We want to make sure that AI is complementing the way we work in the UK, not disrupting it - making our jobs better, rather than taking them away," Technology Secretary Michelle Donelan told the Sun.

The report notes AI's impact will vary across different sectors - 46% of tasks in administrative and 44% in legal professions could be automated, but only 6% in construction and 4% in maintenance, it says.

BBC News has previously reported some artists' concerns AI image generators could harm their employment prospects.

...
https://www.bbc.com/news/technology-65102150
 
“This is an insult”, “What are you smoking?” – ChatGPT names Liverpool star as Virat Kohli’s football equivalent, fans lose their mind

Artificial intelligence platform ChatGPT has made waves across the world since its launch in November last year. The chatbot's ability to provide accurate answers to prompts provided by users has been impressive.

ChatGPT produced a stellar list when asked to name football's all-time best XI recently, including players like Lionel Messi and Cristiano Ronaldo. The Premier League's Indian Twitter handle has now hopped on the bandwagon, prompting the platform to name football equivalents for top Indian cricketers.

Chennai Super Kings captain Mahendra Singh Dhoni was notably likened to Chelsea legend Petr Cech. ChatGPT also picked out Liverpool captain Jordan Henderson as the Premier League equivalent of RCB superstar Kohli:

"Kohli would likely be Liverpool's captain, Jordan Henderson. He has a tireless work ethic and is known for his leadership abilities. Henderson has consistently set high standards for himself and his team, which is something that Kohli has done throughout his career."

...
https://www.sportskeeda.com/footbal...at-kohli-s-football-equivalent-fans-lose-mind
 
AI ‘could be’ danger to society, US President Biden says
Joe Biden says AI developers have a responsibility to ensure products are safe before releasing them to the public.

United States President Joe Biden has said artificial intelligence (AI) “could be” dangerous but it remains to be seen how the technology will affect society.

Speaking at the start of a meeting with science and technology advisers on Tuesday, Biden said technology companies had a responsibility to ensure their products are safe before their release.

“Tech companies have a responsibility, in my view, to make sure their products are safe before making them public,” Biden said at the opening of a meeting of the President’s Council of Advisors on Science and Technology.

Asked if AI was dangerous, Biden said it “remains to be seen” but “it could be”.

Biden said AI could help tackle challenges like disease and climate change but that developers of the technology would also have to address “potential risks to our society, to our economy, to our national security”.

The president said the effects of social media on young people’s mental health showed the harm new technologies can inflict if safeguards are not in place.

...
https://www.aljazeera.com/economy/2023/4/5/ai-could-be-danger-to-society-us-president-biden-says
 
What benefits did you see?
I saw:
a.) Availability
b.) Early access to new features.
c.) Speed

I have mainly used it for coding purposes, solving coding problems, and it has increased my productivity. I found it much better than Stack Overflow. I will next use it for SEO, keyword research and other marketing materials. Very handy tool for me. Only those who don't have any skill should be scared of AI. I will treat it as my friend. :inti
 
You guys should also check out Midjourney. This AI tool generates images after you give it a description. It is also subscription based, around 30 USD per month. :inti
 
I saw:
a.) Availability
b.) Early access to new features.
c.) Speed

I have mainly used it for coding purposes, solving coding problems, and it has increased my productivity. I found it much better than Stack Overflow. I will next use it for SEO, keyword research and other marketing materials. Very handy tool for me. Only those who don't have any skill should be scared of AI. I will treat it as my friend. :inti

I had it write code for a healthcare standard and it missed one part. I asked it to correct itself and include that, and it wrote me new code with the corrections. That left me super impressed and in awe.

But yes, one has to have some skill to get the right solution. It's like a Google that understands natural language, but you need to know what to ask, and to keep asking until the performance-tuned methods and functions come through and you get the best answer. It truly is amazing in a way.
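
For anyone curious what that ask-then-correct loop looks like outside the chat window, here is a minimal sketch against the openai Python package as it existed at the time (the model name, key, and the healthcare-standard prompts are placeholders I made up, not the poster's actual code):

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

messages = [{
    "role": "user",
    "content": "Write a Python function that validates an HL7 v2 message header.",
}]
first = openai.ChatCompletion.create(model="gpt-4", messages=messages)
draft = first["choices"][0]["message"]["content"]

# Keep the draft in the conversation, then point out what it missed,
# exactly as you would in the chat window.
messages.append({"role": "assistant", "content": draft})
messages.append({
    "role": "user",
    "content": "You missed validating the field separator. "
               "Correct the function and include that check.",
})
second = openai.ChatCompletion.create(model="gpt-4", messages=messages)
print(second["choices"][0]["message"]["content"])

The key detail is appending the model's own draft back into the message list; that is what lets it "correct itself" with the full context of its first attempt.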
 
You guys should also check out Midjourney. This AI tool generates images after you give it a description. It is also subscription based, around 30 USD per month. :inti

It is amazing. I used it on Discord, but I don't see how I can utilize it as a backend coder.

I do see people using it for presentations, though I'm not sure how original that is, or what the copyright issues are.
 
An Australian mayor said he may take legal action over false information shared by advanced chatbot ChatGPT.

Brian Hood, Mayor of Hepburn Shire Council, says the OpenAI-owned tool falsely claimed he was imprisoned for bribery while working for a subsidiary of Australia's national bank.

In fact, Mr Hood was a whistleblower and was never charged with a crime.

His lawyers have sent a concerns notice to OpenAI - the first formal step in defamation action in Australia.

OpenAI has 28 days to respond to the concerns notice, after which time Mr Hood would be able to take the company to court under Australian law.

If he pursues the legal claim, it would be the first time OpenAI has publicly faced a defamation suit over the content created by ChatGPT.

OpenAI has not responded to a BBC request for comment.

Millions of people have used ChatGPT since it launched in November 2022.

It can answer questions using natural, human-like language and can also mimic other writing styles, drawing on a snapshot of the internet as it was in 2021 as its knowledge base.

Microsoft has invested billions of dollars in OpenAI, and ChatGPT was added to its Bing search engine in February 2023.

'Plausible-sounding but incorrect'
When people use ChatGPT, they are shown a disclaimer warning that the content it generates may contain "inaccurate information about people, places, or facts".

And on its public blog about the tool, OpenAI says a limitation is that it "sometimes writes plausible-sounding but incorrect or nonsensical answers".

In 2005, Mr Hood was company secretary of Notes Printing Australia, a subsidiary of the Reserve Bank of Australia.

He told journalists and officials about bribery taking place at the organisation linked to a business called Securency, which was part-owned by the bank.

Securency was raided by police in 2010, ultimately leading to arrests and prison sentences worldwide.

Mr Hood was not one of those arrested, and said he was "horrified" to see what ChatGPT was telling people.

"I was stunned at first that it was so incorrect," he told Australian broadcaster ABC News.

"It's one thing to get something a little bit wrong, it's entirely something else to be accusing someone of being a criminal and having served jail time when the truth is the exact opposite.

"I think this is a pretty stark wake-up call. The system is portrayed as being credible and informative and authoritative, and it's obviously not."

Different chatbots, different answers
The BBC was able to confirm Mr Hood's claims by asking the publicly available version of ChatGPT on OpenAI's website about the role he had in the Securency scandal.

It responded with a description of the case, then inaccurately stated that he "pleaded guilty to one count of bribery in 2012 and was sentenced to four years in prison".

But the same result does not appear in the newer version of ChatGPT which is integrated into Microsoft's Bing search engine.

It correctly identifies him as a whistleblower, and specifically says he "was not involved in the payment of bribes... as claimed by an AI chatbot called ChatGPT".

BBC
 
AI generated news presenter debuts in Kuwait media
Kuwait News introduced Fedha, promising that it could read online news in the future

A Kuwaiti media outlet has unveiled a virtual news presenter generated using artificial intelligence, with plans for it to read online bulletins.

“Fedha” appeared on the Twitter account of the Kuwait News website on Saturday as an image of a woman, hair uncovered, wearing a black jacket and white T-shirt.

“I’m Fedha, the first presenter in Kuwait who works with artificial intelligence at Kuwait News. What kind of news do you prefer? Let’s hear your opinions,” she said in Arabic.

The site is affiliated with the Kuwait Times, founded in 1961 as the Gulf region’s first English-language daily.

Abdullah Boftain, deputy editor in chief for both outlets, said it was a test of AI’s potential to offer “new and innovative content”.

...
https://www.theguardian.com/world/2023/apr/11/ai-generated-news-presenter-debuts-in-kuwait-media
 
China races to regulate AI after playing catchup to ChatGPT
China’s cyberspace agency wants to ensure AI will not attempt to ‘undermine national unity’ or ‘split the country’.

After playing catchup to ChatGPT, China is racing to regulate the rapidly advancing field of artificial intelligence (AI).

Under draft regulations released this week, Chinese tech companies will need to register generative AI products with China’s cyberspace agency and submit them to a security assessment before they can be released to the public.

The regulations cover practically all aspects of generative AI, from how it is trained to how users interact with it, in an apparent bid by Beijing to control the at times unwieldy technology, whose breakneck development has prompted warnings from tech leaders including Elon Musk and Apple co-founder Steve Wozniak.

Under the rules unveiled by the Cyberspace Administration of China on Tuesday, tech companies will be responsible for the “legitimacy of the source of pre-training data” to ensure content reflects the “core value of socialism”.

Companies must ensure AI does not call for the “subversion of state power” or the overthrow of the ruling Chinese Communist Party (CCP), incite moves to “split the country” or “undermine national unity”, produce content that is pornographic, or encourage violence, extremism, terrorism or discrimination.

They are also restricted from using personal data as part of their generative AI training material and must require users to verify their real identity before using their products.

Those who violate the rules will face fines of between 10,000 yuan ($1,454) and 100,000 yuan ($14,545) as well as a possible criminal investigation.

While China has yet to match the success of California-based OpenAI’s groundbreaking ChatGPT, its push to regulate the nascent field has moved faster than elsewhere.

AI in the United States is still largely unregulated outside of the recruiting industry. AI regulation has yet to receive much traction in US Congress, although privacy-related regulations around AI are expected to start rolling out at the state level this year.

The European Union has proposed sweeping legislation known as the AI Act, which would classify which kinds of AI are “unacceptable” and banned, which are “high risk” and regulated, and which are left unregulated.

...
https://www.aljazeera.com/economy/2...i-regulation-after-playing-catchup-to-chatgdp
 
Pakistani judge uses ChatGPT to make court decision
After exchanges with ChatGPT, judge used his own arguments as basis for the decision

Islamabad: In a trailblazing move, a judge in Pakistan used ChatGPT for a court decision, making it the first time a legal decision has been made in the country with the help of an artificial intelligence (AI) text-generating chatbot.

Additional district and sessions judge Mohammad Amir Munir, who presides over the Phalia court in Mandi Bahauddin district of Punjab province, said he used the AI tool to ask legal questions about the case, including whether a juvenile accused of a criminal offence could be entitled to pre-arrest bail.

The judge said that the purpose of the experiment was to test the capabilities of AI technology and determine whether ChatGPT can be useful in assisting the justice system to pass efficient and intelligent judicial orders and judgments in accordance with the law.

He said that the decision to allow the pre-arrest bail application was not based on the answers provided by ChatGPT but on the human judicial mind.

“My decision to allow this pre-arrest bail application is not based on the answers to be provided by the artificial intelligence program Chatbot GPT-4,” Judge Munir wrote in the decision dated March 29, 2023.

...
https://gulfnews.com/world/asia/pak...trailblazing move,AI) text-generating chatbot.
 
Michael Schumacher: Magazine editor sacked over AI-generated 'interview' with seven-time F1 champion

The editor of a German magazine that published an artificial intelligence-generated 'interview' with Michael Schumacher has been sacked.

The magazine's publisher has apologised to the Formula 1 legend's family.

Schumacher, a seven-time world champion, suffered severe head injuries in a skiing accident in December 2013 and has not been seen in public since.

Die Aktuelle ran a front cover with a headline of "Michael Schumacher, the first interview".

A strapline underneath a smiling picture of Schumacher read "it sounded deceptively real", and it emerged in the article that the supposed quotes had been produced by AI.

The article was produced using an AI programme called character.ai, which artificially generated Schumacher 'quotes' about his health and family.

"I can with the help of my team actually stand by myself and even slowly walk a few steps," read the Schumacher 'quotes'.

"My wife and my children were a blessing to me and without them I would not have managed it. Naturally they are also very sad, how it has all happened.

"They support me and are standing firmly at my side."

Schumacher's family said on Friday that they plan to take legal action against the magazine and over the weekend its publisher issued an apology.

"This tasteless and misleading article should never have appeared. It in no way meets the standards of journalism that we - and our readers - expect," said Bianca Pohlmann, managing director of Funke media group.

"As a result of the publication of this article, immediate personnel consequences will be drawn.

"Die Aktuelle editor-in-chief Anne Hoffmann, who has held journalistic responsibility for the paper since 2009, will be relieved of her duties as of today."

Following his skiing accident, Schumacher was placed into an induced coma and was brought home in September 2014, with his medical condition since kept private by his family.

Schumacher, 54, won two of his F1 world drivers' titles with Benetton in 1994 and 1995, while he claimed five in a row for Ferrari from 2000 to 2004.

His seven F1 titles are a record shared jointly with Lewis Hamilton, while Schumacher achieved 91 race wins over his career, a record Hamilton surpassed in 2020.

The German originally retired from racing in 2006 but returned in 2010 before again retiring two years later.

Schumacher's son Mick used to drive for Haas in F1 and is currently a reserve driver for Mercedes.

...
https://www.bbc.com/sport/formula1/65361193
 
'Godfather of AI' Geoffrey Hinton warns about advancement of technology after leaving Google job
Acclaimed computer scientist Geoffrey Hinton has joined numerous other industry experts to warn against the rapid development of artificial intelligence that could threaten privacy and jobs.

An artificial intelligence trailblazer dubbed the "godfather of AI" has issued a warning about the technology he helped create.

Geoffrey Hinton is the latest to join a growing list of experts who are sharing their concerns about the rapid advancement of artificial intelligence.

Mr Hinton went so far as to leave his job at Google in order to speak openly about his worries about the technology and the real threat it could pose to humanity.

"The idea that this stuff could actually get smarter than people - a few people believed that," he said in an interview with The New York Times.

"But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

The possible limitations of the technology that Mr Hinton and other experts in the field are worried about include the potential for AI systems to make errors, to provide biased recommendations, to threaten privacy, to empower bad actors with new tools, and to have an impact on jobs.

...
https://news.sky.com/story/godfathe...-technology-after-leaving-google-job-12871065
 
In recent years there has been much talk of AI and how its rapid advancement will lead to it outsmarting humans, perhaps even gaining sentience and taking over from us, a topic straight out of science-fiction books and films such as The Matrix and Terminator.

Can it really get to that point, or are these people just sensationalizing? I don't trust Elon Musk one bit, but after him this supposed godfather of AI, Geoffrey Hinton, is also claiming something similar.

A bit of an overreaction, or is there some truth to it? Perhaps those who know about neural networks and AI here can provide some insight.

I think it's a bit overblown. AI can only be malicious if it's used for malicious purposes (as they say in programming jargon, garbage in, garbage out; the same applies to the design and the goal). I also do not see how it can "take over" everything if it's not designed to do so. Even if it's constantly learning, if certain root conditions are applied, such as Isaac Asimov's laws of robotics, I cannot fathom us getting to the point these "experts" keep talking about.


https://www.cnbc.com/2023/05/01/godfather-of-ai-leaves-google-after-a-decade-to-warn-of-dangers.html

“I thought it was 30 to 50 years or even longer away,” Hinton told the Times, in a story published Monday. “Obviously, I no longer think that.”

Hinton, who was named a 2018 Turing Award winner for conceptual and engineering breakthroughs, said he now has some regrets over his life’s work, the Times reported. He cited the near-term risks of AI taking jobs, and the proliferation of fake photos, videos and text that appear real to the average person.

In a statement to CNBC, Hinton said, “I now think the digital intelligences we are creating are very different from biological intelligences.”

Hinton referenced the power of GPT-4, the most advanced large language model, or LLM, from startup OpenAI, whose technology has gone viral since the chatbot ChatGPT was launched late last year. Here’s how he described what’s happening now:

“If I have 1,000 digital agents who are all exact clones with identical weights, whenever one agent learns how to do something, all of them immediately know it because they share weights,” Hinton told CNBC. “Biological agents cannot do this. So collections of identical digital agents can acquire hugely more knowledge than any individual biological agent. That is why GPT-4 knows hugely more than any one person.”
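
A toy illustration of that weight-sharing claim (just the arithmetic of Hinton's point, nothing to do with how GPT-4 is actually built): if every clone literally references the same weight array, one clone's learning step is instantly every clone's learning step.

import numpy as np

shared_weights = np.zeros(4)  # one set of weights, referenced by every clone

class Agent:
    def __init__(self, weights):
        self.weights = weights  # a reference to the shared array, not a copy

    def predict(self, x):
        return float(self.weights @ x)

    def learn(self, x, target, lr=0.5):
        error = target - self.predict(x)
        self.weights += lr * error * x  # in-place update hits the shared array

clones = [Agent(shared_weights) for _ in range(1000)]
x, target = np.array([1.0, 0.0, 0.0, 0.0]), 1.0

clones[0].learn(x, target)     # only clone 0 ever sees the training example
print(clones[999].predict(x))  # 0.5: every other clone already "knows" it

A "biological" agent, by contrast, would be built with weights.copy(); clone 999 would still predict 0.0 and would have to be taught separately.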
 
AI poses existential threat and risk to health of millions, experts warn
BMJ Global Health article calls for halt to ‘development of self-improving artificial general intelligence’ until regulation in place

AI could harm the health of millions and pose an existential threat to humanity, doctors and public health experts have said as they called for a halt to the development of artificial general intelligence until it is regulated.

Artificial intelligence has the potential to revolutionise healthcare by improving diagnosis of diseases, finding better ways to treat patients and extending care to more people.

But the development of artificial intelligence also has the potential to produce negative health impacts, according to health professionals from the UK, US, Australia, Costa Rica and Malaysia writing in the journal BMJ Global Health.

The risks associated with medicine and healthcare “include the potential for AI errors to cause patient harm, issues with data privacy and security and the use of AI in ways that will worsen social and health inequalities”, they said.

One example of harm, they said, was the use of an AI-driven pulse oximeter that overestimated blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia.

But they also warned of broader, global threats from AI to human health and even human existence.

AI could harm the health of millions via the social determinants of health through the control and manipulation of people, the use of lethal autonomous weapons and the mental health effects of mass unemployment should AI-based systems displace large numbers of workers.

...
https://www.theguardian.com/technol...t-and-risk-to-health-of-millions-experts-warn
 
The chief executive of OpenAI, Sam Altman, faced questions about Artificial Intelligence (AI) at a hearing in the US Senate
The company created the app ChatGPT, which can write essays, scripts and poems, and solve coding problems in a human-like way
Altman told the US Senate his industry needs to be regulated by the government as AI becomes 'increasingly powerful'
Politicians were searching for answers on the potential threats AI poses and raised fears over the 2024 election
Several experts including Dr Geoffrey Hinton, the so-called godfather of AI, have recently raised concerns about the fast-developing technology
 
Scientists use AI to discover new antibiotic to treat deadly superbug
AI used to discover abaucin, an effective drug against A baumannii, bacteria that can cause dangerous infections

Scientists using artificial intelligence have discovered a new antibiotic that can kill a deadly superbug.

According to a new study published on Thursday in the science journal Nature Chemical Biology, a group of scientists from McMaster University and the Massachusetts Institute of Technology have discovered a new antibiotic that can be used to kill a deadly hospital superbug.

The superbug in question is Acinetobacter baumannii, which the World Health Organization has classified as a “critical” threat among its “priority pathogens” – a group of bacteria families that pose the “greatest threat” to human health.

According to the WHO, the bacteria have built-in abilities to find new ways to resist treatment and can pass along genetic material that allows other bacteria to become drug-resistant as well.

A baumannii poses a threat to hospitals, nursing homes and patients who require ventilators and blood catheters, as well as those who have open wounds from surgeries.

The bacteria can live for prolonged periods of time on environmental surfaces and shared equipment, and can often be spread through contaminated hands. In addition to blood infections, A baumannii can cause infections in urinary tracts and lungs.

According to the Centers for Disease Control and Prevention, the bacteria can also “colonize” or live in a patient without causing infections or symptoms.

Thursday’s study revealed that researchers used an AI algorithm to screen thousands of antibacterial molecules in an attempt to predict new structural classes. As a result of the AI screening, researchers were able to identify a new antibacterial compound which they named abaucin.

“We had a whole bunch of data that was just telling us about which chemicals were able to kill a bunch of bacteria and which ones weren’t. My job was to train this model, and all that this model was going to be doing is telling us essentially if new molecules will have antibacterial properties or not,” said Gary Liu, a graduate student from McMaster University who worked on the research.

“Then basically through that, we’re able to just increase the efficiency of the drug discovery pipeline and … hone in on the molecules that we really want to care about,” he added.

...
https://www.theguardian.com/technol...elligence-antibiotic-deadly-superbug-hospital
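
The study's actual model was a trained deep-learning system built on real lab assay data; purely to illustrate the screen-and-rank idea Liu describes, here is a toy sketch with made-up numbers: fit a classifier on labelled molecular fingerprints, then use its predicted probabilities to prioritise an unscreened library so only the most promising molecules go to the wet lab.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Made-up stand-ins: 2,000 screened molecules as 128-bit fingerprints,
# labelled 1 if they killed the bacterium in the assay, 0 otherwise.
X_known = rng.integers(0, 2, size=(2000, 128))
y_known = (X_known[:, :8].sum(axis=1) >= 5).astype(int)  # fake activity rule

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_known, y_known)

# A large unscreened library: score every molecule and rank by predicted
# probability of activity, rather than testing all 10,000 in the lab.
X_library = rng.integers(0, 2, size=(10000, 128))
scores = model.predict_proba(X_library)[:, 1]
top10 = np.argsort(scores)[::-1][:10]
print("most promising candidates:", top10)

Swap the random arrays for real fingerprints and assay labels and that is, in miniature, the "increase the efficiency of the drug discovery pipeline" step he is talking about.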
 
I am a programmer, and ChatGPT and the other AI tools now being introduced will definitely affect my field massively in the near future.
 