Robo-bop? Jazz-playing robots might one day headline a club near you
The shadowy arm of the US Defense Department devoted to funding cutting-edge technology is building an interactive robotics system powerful enough to perform an incredibly difficult task: a trumpet solo.
The Defense Advanced Research Projects Agency (Darpa), the US military’s technology research arm, has handed over its first cheque to Kelland Thomas, associate director of the University of Arizona School of Information (and a jazz musician in his own right), to fund musical machines.
“The goal of our research is to build a computer system and then hook it up to robots that can play instruments, and can play with human musicians in ways that we recognize as improvisational and adaptive,” said Thomas.
Machine learning is a complex field, and one that a scientist at Darpa’s Robotics Challenge in Pomona, California, earlier this year likened to “a three-day-old child”. A three-day-old child’s brain is incredibly powerful, but it doesn’t yet know how to riff like Charlie Parker. Thomas’s goal is to change that.
The program administering the $2m grant to Thomas’s team – which consists of researchers from the University of Arizona, the University of Illinois at Urbana-Champaign, and Oberlin College – is Darpa’s Information Innovation Office. It’s run by Paul Cohen, who joined Darpa in 2013 from Arizona’s School of Information that Thomas now runs, and is devoted to breaking down barriers between human and machine communication.
“He was the head of [the School of Information Services, Technology and the Arts, or Sista] when I went part time to Sista – and then he left for Darpa,” Thomas recalled of Cohen. “He was the one who called my attention to the program after he designed the BAA [broad agency announcement – a call for proposals].”
To jam effectively, Thomas said, machines will have to study data at scale and then synthesize it based on input from people; learning to play jazz demands all of those things.
“We’re getting lots of video of musicians playing in front of a green screen together,” Thomas explained. “We’re going to build a database of musical transcription: every Miles Davis solo and every Louis Armstrong solo we’re going to hand-curate. We’re going to develop machine learning techniques to analyze these solos and find deeper relationships between the notes and the harmonies, and that will inform the system – that’ll be the knowledge base.”
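The article doesn’t detail those machine learning techniques, but the shape of the idea is easy to sketch. Below, a first-order Markov chain over (chord, note) events stands in for the hand-curated transcriptions and the learned note-harmony relationships; the data and names are illustrative, not the project’s own.

```python
from collections import defaultdict
import random

# Hypothetical hand-curated transcription: (chord, note) events from one solo.
# The real knowledge base would hold every Miles Davis and Louis Armstrong solo.
solo = [("Dm7", "D4"), ("Dm7", "F4"), ("G7", "B3"), ("G7", "F4"),
        ("Cmaj7", "E4"), ("Cmaj7", "G4"), ("Dm7", "A4"), ("G7", "B4")]

# Record which note follows each (chord, note) context. Repeats in the list
# act as frequency weights -- a crude stand-in for the "deeper relationships"
# between notes and harmonies the team wants to learn.
transitions = defaultdict(list)
for (chord, note), (_, next_note) in zip(solo, solo[1:]):
    transitions[(chord, note)].append(next_note)

def suggest_next(chord, note):
    """Sample a continuation seen in the corpus, or None if the context is unseen."""
    candidates = transitions.get((chord, note))
    return random.choice(candidates) if candidates else None

print(suggest_next("Dm7", "F4"))  # -> "B3" in this toy corpus
```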
That will be the back end, said Thomas. The front end will be a microphone that listens to the human musician playing an instrument and uses its vast repository of jazz solos to make decisions about what to play next. “We want to get to a point where it’s playing things back to the human performer that the human performer will recognize as collaborative.”
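Put crudely, that listen-and-respond loop might look like the following sketch, reusing suggest_next from the knowledge-base example above; detect_note and play are hypothetical placeholders for the real-time pitch detection and sound output the article doesn’t specify.

```python
# detect_note and play are placeholders: the article doesn't say how the
# team will do real-time pitch detection or drive the output instrument.
def detect_note(audio_frame):
    ...  # hypothetical: return the (chord, note) heard in this frame, or None

def play(note):
    ...  # hypothetical: send the chosen note to a synth, Midi piano or robot

def accompany(audio_stream):
    """Listen to the human player and answer from the solo repository."""
    for frame in audio_stream:
        heard = detect_note(frame)
        if heard is None:
            continue  # silence, or nothing recognisable yet
        chord, note = heard
        response = suggest_next(chord, note)  # consult the knowledge base
        if response is not None:
            play(response)
```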
One problem Thomas anticipates is what he describes as a version of the “uncanny valley” – the tendency of humans to find non-humanoid robots cute and too-humanoid robots creepy and weird. At first, Thomas said, “we’ll hook it up to synthesized sound. If you were to take that same output and hook it up to a midi piano, it might sound like a pianist. Once it doesn’t sound like it’s producing synthesized output, then it can fool humans. Synthesized output always tends to sound kind of mechanized.”
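Driving a midi piano with that same output, as Thomas suggests, is simple to sketch. The mido library below is one possible choice (the article names none), and the note numbers and duration are illustrative.

```python
import time
import mido  # a general-purpose Midi library; our pick, not the team's

# A handful of note numbers for the sketch; 60 is middle C.
NOTE_NUMBERS = {"C4": 60, "D4": 62, "E4": 64, "F4": 65, "G4": 67}

def play_on_midi_piano(note_name, duration=0.5, port_name=None):
    """Send one note to the default (or named) Midi output port."""
    number = NOTE_NUMBERS[note_name]
    with mido.open_output(port_name) as port:
        port.send(mido.Message("note_on", note=number, velocity=80))
        time.sleep(duration)  # hold the note
        port.send(mido.Message("note_off", note=number))
```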
The researchers are a long way off from a double bill of Herbie Hancock and, say, the JazzMoTron Trio at Carnegie Hall, but that is the goal, once Thomas and others developing JazzMoTron’s predecessors can teach them to move beyond trying to copy Miles Davis. “The way that musicians learn how to play jazz, they mimic the style and the impressions – learning the style is sort of the first way into the music – and they have a knowledge base of solos they can play backwards and forwards,” said Thomas.
“It’s a stored warehouse of information,” he said. “What creative jazz players do over time is synthesize this database of everything they know from all of their heroes – Charlie Parker and John Coltrane and so on – and they’re playing something that comes from a tradition but synthesizes all these different expressions. There’s a process there that I think we could actually model. At what point does this thing stop sounding like Miles Davis and start producing something that sounds new?
“When we hear that something has emotion, there may be a sort of ineffable quality that a musician has on the stage. I think the more interesting question is what the gap looks like between a machine being creative and a human being creative.”
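The process Thomas thinks could be modelled might, at its simplest, be a weighted blend of per-artist models, so that no single hero dominates what comes out. A toy version, assuming per-artist transition tables built as in the earlier sketch; the weights and data are hypothetical:

```python
import random

def blended_suggestion(context, artist_models, weights):
    """Sample the next note from a weighted blend of per-artist models."""
    pool = []
    for name, model in artist_models.items():
        for note in model.get(context, []):
            pool.extend([note] * weights.get(name, 1))  # weight = repetition
    return random.choice(pool) if pool else None

# Hypothetical per-artist transition tables and weights.
davis = {("Dm7", "F4"): ["B3", "A3"]}
parker = {("Dm7", "F4"): ["A4", "C5", "C5"]}
print(blended_suggestion(("Dm7", "F4"),
                         {"davis": davis, "parker": parker},
                         {"davis": 1, "parker": 2}))
```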