Monday, March 17, 2008

Bringing Second Life To Life: Researchers Create Character With Reasoning Abilities of a Child

Source: Physorg.com

http://www.physorg.com/news124368610.html

"Troy, N.Y. – Today’s video games and online virtual worlds give users the freedom to create characters in the digital domain that look and seem more human than ever before. But despite having your hair, your height, and your hazel eyes, your avatar is still little more than just a pretty face.

A group of researchers from Rensselaer Polytechnic Institute is working to change that by engineering characters with the capacity to have beliefs and to reason about the beliefs of others. The characters will be able to predict and manipulate the behavior of even human players, with whom they will directly interact in the real, physical world, according to the team.

At a recent conference on artificial intelligence, the researchers unveiled the “embodiment” of their success to date: “Eddie,” a 4-year-old child in Second Life who can reason about his own beliefs to draw conclusions in a manner that matches human children his age.

“Current avatars in massively multiplayer online worlds — such as Second Life — are directly tethered to a user’s keystrokes and only give the illusion of mentality,” said Selmer Bringsjord, head of Rensselaer’s Cognitive Science Department and leader of the research project. “Truly convincing autonomous synthetic characters must possess memories; believe things, want things, remember things.”

Such characters can only be engineered by coupling logic-based artificial intelligence and computational cognitive modeling techniques with the processing power of a supercomputer, according to Bringsjord.

The principles and techniques that humans deploy in order to understand, predict, and manipulate the behavior of other humans are collectively referred to as a “theory of mind.” Bringsjord’s research group is now starting to engineer part of that theory, which would allow artificial agents to understand, predict, and manipulate the behavior of other agents, in order to be genuine stand-ins for human beings or autonomous intellects in their own right."...


and

"The researchers recreated the same situation in Second Life, using an automated theorem prover coupled with procedures for converting conversational English in Second Life into formal logic, the native language of the prover.

When the code is executed, the software simulates keystrokes in Second Life. This enables control of “Eddie,” who demonstrates an incorrect prediction of where Person A will look for the teddy bear — a response consistent with that of a 4-year-old child. But, in an instant, Eddie’s mind can be improved, and if the test is run again, he makes the correct prediction.
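
Again as a hypothetical sketch rather than the team's actual code: the toggle below captures the behavior the release describes, where an "unimproved" Eddie answers from the true state of the world and the improved Eddie answers from the belief he ascribes to Person A.

```python
# Hypothetical sketch of the false-belief test described above. Eddie
# predicts where Person A will look for the teddy bear; the flag plays
# the role of the "improvement" to Eddie's mind.

def predict_search(true_location, ascribed_belief, theory_of_mind=False):
    # Without a theory of mind, answer from reality (the error the
    # article reports); with one, answer from A's ascribed belief.
    return ascribed_belief if theory_of_mind else true_location

true_location = "basket"   # Person B moved the bear while A was away
a_belief = "cabinet"       # A never saw the move, so A's belief is stale

print(predict_search(true_location, a_belief))                       # 'basket'  (incorrect)
print(predict_search(true_location, a_belief, theory_of_mind=True))  # 'cabinet' (correct)
```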

A video clip of the “False Belief in Second Life” demonstration is available online at: http://www.cogsci.rpi.edu/research/rair/asc_rca.

“Our aim is not to construct a computational theory that explains and predicts actual human behavior, but rather to build artificial agents made more interesting and useful by their ability to ascribe mental states to other agents, reason about such states, and have — as avatars — states that are correlates to those experienced by humans,” Bringsjord said. “Applications include entertainment and gaming, but also education and homeland defense.”

This research is supported by IBM and other outside sponsors, and the team hopes to engineer a version of the Star Trek holodeck — a virtual reality system used aboard the series’ starships that allowed users to interact with projected holograms of other individuals. Such a system could allow cognitively robust synthetic characters to interact directly with human beings, according to Bringsjord.

The proposed research would require the use of two of Rensselaer’s state-of-the-art research facilities — the Computational Center for Nanotechnology Innovations (CCNI) and the Experimental Media and Performing Arts Center (EMPAC).

The most powerful university-based supercomputing system in the world, the CCNI is made up of massively parallel Blue Gene supercomputers, POWER-based Linux clusters, and AMD Opteron processor-based clusters, providing more than 100 teraflops of computing power."

In other news published today by Physorg.com from Upstate New York, birthplace of IBM, Xerox, and Kodak:

Researchers discover second depth-perception method in brain

"It's common knowledge that humans and other animals are able to visually judge depth because we have two eyes and the brain compares the images from each. But we can also judge depth with only one eye, and scientists have been searching for how the brain accomplishes that feat.

Now, a team led by a scientist at the University of Rochester believes it has discovered the answer in a small part of the brain that processes both the image from a single eye and the motion of our bodies.

The team of researchers, led by Greg DeAngelis, professor in the Department of Brain and Cognitive Sciences at the University of Rochester, has published the findings in the March 20 online issue of the journal Nature.

“It looks as though in this area of the brain, the neurons are combining visual cues and non-visual cues to come up with a unique way to determine depth,” says DeAngelis.

DeAngelis says that means the brain uses a whole array of methods to gauge depth. In addition to two-eyed “binocular disparity,” the brain has neurons that specifically measure our motion, perspective, and how objects pass in front of or behind each other to create an approximation of the three-dimensional world in our minds.
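
DeAngelis describes neurons combining visual and non-visual cues without giving a formula. A common textbook model of such fusion, shown below with invented numbers, is reliability-weighted averaging, in which each cue's depth estimate is weighted by the inverse of its variance; this is an illustration, not the mechanism the Nature paper reports.

```python
# Illustrative reliability-weighted cue combination (a standard model
# of sensory fusion; the numbers are invented, and the Nature study
# identifies a brain area, not this formula).

def fuse_depth(cues):
    # cues: list of (depth_estimate_m, variance) pairs, one per cue.
    weights = [1.0 / var for _, var in cues]
    total = sum(weights)
    depth = sum(w * d for w, (d, _) in zip(weights, cues)) / total
    return depth, 1.0 / total  # the fused estimate beats any single cue

cues = [
    (2.0, 0.10),  # binocular disparity: reliable, so heavily weighted
    (2.6, 0.40),  # motion parallax: noisier, so weighted less
]
print(fuse_depth(cues))  # -> (2.12, 0.08)
```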

The researchers say the findings may help in treating children who were born with misaligned eyes, restoring more normal binocular vision in the brain. The discovery could also help construct more compelling virtual reality environments someday, says DeAngelis, since we have to know exactly how our brains construct three-dimensional perception to make virtual reality as convincing as possible."
