Sony put four perspectives in one VR headset

At the back of a cold gray warehouse in Austin, Texas, I put on a headset and proceeded to chase three strangers in a game of tag at SXSW. I was expecting an immersive VR experience far removed from reality, but instead the head-mounted display split my field of view into four squares representing different perspectives of the room. The top left corner showed me what I was looking at, while the adjacent blocks brought in the view of the room as seen through the other participants' eyes.

Sony's Computer Science Laboratories brought the tag experience, called Superception, to the company's installation at SXSW this year. According to researcher Shunichi Kasahara, who developed the multi-person experience, the ability to share perspectives in real time can augment a person's visual capacity and also boost empathy in a VR setting. By extending the user's vision to include other points of view, he believes his network creates a kind of "super perception."

"The core idea behind it is to use technology to go beyond the limitations of our human perspective," Kasahara told me. "This could be a nomadic VR application where we use it in outdoor game settings." Engaging with other players in real time certainly made for a collaborative, far less isolating experience. But the activity also brought on the sense of nausea that is often the downfall of movement in VR.

At first, seeing these different viewpoints in the game was confusing, even dizzying. My legs got shaky walking on the flat concrete floor, as my sense of the space shifted constantly depending on which view my eyes chose to focus on. While the corner view matched my footsteps, my brain was often tricked into taking all the perspectives into account. A few minutes into the game, though, I learned to relate my position to that of the others and got a better sense of the space. Being able to see where I was going in the context of how others saw my movements made for a chaotic but exciting interpersonal experience.

Kasahara built this network of interconnected headsets to push people to relate their physical and spatial experiences to those of others around them. Last year, participants in one of his experiments were able to draw a sketch of the Statue of Liberty based on simultaneous inputs from each other. And this week, Superception saw a group of people share their first-person perspectives to chase each other around a warehouse. Both iterations of his work demonstrate how everyone views the world differently. But they're also indicative of the power of understanding those views to build a more cohesive experience.


Source: Engadget

Author: Daily Tech Whip

