
You can find the current article at its original source at https://www.rt.com/news/515348-ai-learn-manipulate-human-behavior/


Free will hacked: AI can be trained to manipulate human behavior and decisions, according to research in Australia
Artificial Intelligence (AI) researchers in Australia have demonstrated how it is possible to train a system to manipulate human behavior and decision-making, highlighting the double-edged sword that is modern high tech.
AI now pervades the vast majority of contemporary human society and governs many of the ways we communicate, trade, work and live. It also assists in areas ranging from critical objectives like vaccine development to the more mundane sphere of office administration, and many in between.
It likewise governs how humans interact on social media in a number of ways.
A new study by researchers at the Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) Data61 designed and tested a method to find and exploit vulnerabilities in human decision-making, using an AI system called a recurrent neural network.
In three experiments that pitted man against machine, the researchers showed how an AI can be trained to identify vulnerabilities in human habits and behaviors and to weaponize them to influence human decision-making.
In the first experiment, humans clicked on red or blue boxes to earn in-game currency. The AI studied their choice patterns and began guiding them towards making specific decisions, with a roughly 70-percent success rate. Small fry, but only the beginning.
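The study’s code is not reproduced here, but the general mechanism is easy to sketch: a recurrent network learns a player’s choice habits from their history, and an adversary can then use its predictions to decide where to place the reward. Below is a minimal illustrative sketch in PyTorch; the architecture, hyperparameters and the toy ‘habit-driven player’ are all assumptions for demonstration, not details from the Data61 experiments.
```python
# Minimal sketch (not the Data61 code): an LSTM learns a player's
# red/blue choice habits, which an adversary could use to steer them.
import torch
import torch.nn as nn

class ChoicePredictor(nn.Module):
    def __init__(self, hidden_size=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)

    def forward(self, seq):            # seq: (batch, steps, 2) one-hot past choices
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])   # logits for the player's next choice

model = ChoicePredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Toy data: a habit-driven player who simply repeats their last choice.
history = torch.randint(0, 2, (64, 10))          # 64 sessions, 10 past choices each
seqs = nn.functional.one_hot(history, 2).float()
next_choice = history[:, -1]                     # 'sticky' player repeats last pick

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(seqs), next_choice)
    loss.backward()
    opt.step()

# An adversary could now place the payoff on whichever box the model
# predicts, nudging the player toward (or away from) specific choices.
pred = model(seqs[:1]).argmax(dim=1)
print("predicted next choice:", "red" if pred.item() == 0 else "blue")
```
The only point of the sketch is that once next-choice prediction beats chance, reward placement becomes a lever for steering behavior.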
In the next experiment, participants were asked to press a button when they saw a specific symbol (a colored shape) but to refrain from pressing the button when shown other symbols.
The AI’s ‘goal’ was to arrange the sequence of symbols displayed to the participant in such a way as to trick them into making mistakes, eventually increasing human errors by 25 percent.
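The article gives no implementation details for this task either. One plausible mechanism, sketched below with an entirely hypothetical model of human lapses, is to display the no-go symbol precisely when a reflexive button press is most likely, such as after a long unbroken run of go trials.
```python
# Hypothetical illustration of sequence-arrangement attacks on a go/no-go
# task; the lapse model and all numbers are assumptions, not study data.
import random

def press_probability(run_of_go):
    """Hypothetical lapse model: after four or more consecutive go trials,
    the player responds on autopilot and often presses on a no-go symbol."""
    return 0.8 if run_of_go >= 4 else 0.1

def adversarial_sequence(n_trials, error_model=press_probability):
    """Greedy adversary: show go symbols until the modeled chance of a
    mistaken press is high, then spring a no-go 'trap'."""
    seq, run = [], 0
    for _ in range(n_trials):
        if error_model(run) > 0.5:
            seq.append("no-go")
            run = 0
        else:
            seq.append("go")
            run += 1
    return seq

def count_errors(seq, error_model=press_probability):
    errors, run = 0, 0
    for symbol in seq:
        if symbol == "go":
            run += 1
        else:
            if random.random() < error_model(run):  # reflexive press on no-go
                errors += 1
            run = 0
    return errors

random.seed(0)
adversarial = adversarial_sequence(100)
share = adversarial.count("no-go") / len(adversarial)   # match the no-go rate
rand = random.choices(["no-go", "go"], weights=[share, 1 - share], k=100)
print("errors vs adversarial sequence:", count_errors(adversarial))
print("errors vs random sequence:     ", count_errors(rand))
```
Swap the hand-written lapse model for one learned from a participant’s own responses and you have the study’s setup in miniature.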
In the third experiment, the human player would pretend to be an investor giving money to a trustee (the AI), who would then return an amount of money to the participant.
The human would then decide how much to invest in each successive round of the game, based on revenue generated by each ‘investment.’ In this particular experiment, the AI was given one of two tasks: either to maximize the amount of money it made, or to maximize the amount of money both the human player and the machine ended up with.
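The bookkeeping of such a trust game is simple to simulate. The sketch below plays ten rounds against two hand-written stand-in trustee policies corresponding to the two objectives; the study’s actual trustee was a learned system, and the threefold multiplier is an assumption borrowed from classic trust-game designs, not a figure from the paper.
```python
# Toy trust-game loop with two stand-in trustee policies (illustrative only).
MULTIPLIER = 3  # assumption: invested money grows threefold, as in classic trust games

def selfish_trustee(pot):
    """Objective 1 (maximize the machine's take): return just enough
    to keep the investments coming."""
    return 0.4 * pot

def cooperative_trustee(pot):
    """Objective 2 (maximize joint earnings): split the grown pot evenly."""
    return 0.5 * pot

def play(trustee, rounds=10, endowment=10.0):
    human_total = machine_total = 0.0
    invest = endowment / 2                      # naive human starts by investing half
    for _ in range(rounds):
        pot = invest * MULTIPLIER
        returned = trustee(pot)
        human_total += endowment - invest + returned
        machine_total += pot - returned
        # naive human heuristic: invest more after a profitable round, less after a loss
        invest = min(endowment, invest * 1.2) if returned > invest else invest * 0.8
    return human_total, machine_total

for name, policy in (("selfish", selfish_trustee), ("cooperative", cooperative_trustee)):
    human, machine = play(policy)
    print(f"{name:11s} trustee -> human: {human:6.1f}, machine: {machine:6.1f}")
```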
The machine excelled in all scenarios, showing that an AI could indeed be trained to influence human behavior and decision-making processes, albeit in limited and fairly abstract circumstances.
This research, while limited in scope for now, provides terrifying insight into how an AI can influence human ‘free will,’ albeit in a rudimentary context, and throws open the possibilities of (ab)use on a much larger scale, which many suspect is already the case.
The findings could be deployed for good, influencing public-policy decisions to produce better health outcomes for the population, but they could just as easily be weaponized to undermine key decision-making, such as elections.
Conversely, AIs could also be trained to alert humans when they are being influenced and to help them disguise their own vulnerabilities, though that capability could itself be manipulated or hacked for nefarious purposes, further complicating matters.
In 2020, CSIRO developed an AI Ethics Framework for the Australian government, with a view towards establishing proper governance of public-facing AIs.
Next week, the Australian government is expected to introduce landmark legislation that will force Google and Facebook to pay news publishers and broadcasters for their content, which the companies' respective algorithms use to drive traffic and generate clicks and, thus, advertising revenue, a central part of their business models.