OUTSHIFT

visual neuroscience

14/3/2019


 
I met with Dr Keisuke Suzuki and Dr David Schwarzman, Postdoctoral Research Fellows at the Sackler Centre for Consciousness Science at Sussex University. They were involved in a VR experiment in which a 360 film of the environment is presented to the viewer, interspersed with a live 360 camera feed, as a way of testing which changes people notice, how accurately they notice them, and how large a change they can detect.
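The mechanics of that alternation are easy to picture in code. The sketch below is my own illustration, not their experimental software: it swaps a pre-recorded clip with a live camera feed at a fixed interval using OpenCV. The filename, camera index, and timing are all placeholder assumptions.

```python
import cv2

# Illustrative sketch of the alternation protocol: a pre-recorded film
# of the environment is swapped with a live camera feed, so the viewer
# can be probed on what has changed. All names/values are assumptions.

PRERECORDED = "environment_360.mp4"   # assumed filename
SWAP_EVERY_FRAMES = 150               # assumed: swap source every ~5 s at 30 fps

film = cv2.VideoCapture(PRERECORDED)
live = cv2.VideoCapture(0)            # assumed: default camera

frame_count = 0
showing_film = True

while True:
    source = film if showing_film else live
    ok, frame = source.read()
    if not ok:
        if showing_film:              # loop the film when it runs out
            film.set(cv2.CAP_PROP_POS_FRAMES, 0)
            continue
        break

    cv2.imshow("change-detection demo", frame)
    frame_count += 1
    if frame_count % SWAP_EVERY_FRAMES == 0:
        showing_film = not showing_film   # alternate film <-> live feed

    if cv2.waitKey(30) & 0xFF == 27:      # Esc to quit
        break

film.release()
live.release()
cv2.destroyAllWindows()
```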

Demoing the Quizzer to them, we discussed how it is easier to match a video with a live scene viewed through a camera than to match it with the real world, as the lens looks at the world differently from the eye. (How differently is one of the things I have been trying to find out.)
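To make the point concrete, here is a minimal sketch (not the Quizzer itself) of comparing reference footage against a live feed by cross-fading the two, which works precisely because both images pass through the same lens. The filename and key bindings are assumptions.

```python
import cv2

# Cross-fade a reference video over the live camera feed to check
# alignment. Names and key bindings are illustrative assumptions.

video = cv2.VideoCapture("reference.mp4")  # assumed reference footage
live = cv2.VideoCapture(0)

alpha = 0.5  # blend weight: 0 = live only, 1 = video only

while True:
    ok_v, ref = video.read()
    ok_l, cam = live.read()
    if not (ok_v and ok_l):
        break

    # Match the live frame to the reference frame's size before blending
    cam = cv2.resize(cam, (ref.shape[1], ref.shape[0]))
    blend = cv2.addWeighted(ref, alpha, cam, 1 - alpha, 0)
    cv2.imshow("alignment check", blend)

    key = cv2.waitKey(30) & 0xFF
    if key == 27:                     # Esc to quit
        break
    elif key == ord('+'):             # nudge blend toward the video
        alpha = min(1.0, alpha + 0.05)
    elif key == ord('-'):             # nudge blend toward the live feed
        alpha = max(0.0, alpha - 0.05)

video.release()
live.release()
cv2.destroyAllWindows()
```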

David suggested I think about using a cross to help users line up the video and the real world, something the optometrists I have spoken to have also told me. In fact, I usually tell the viewer which aspect of the scene to line up, and it always contains horizontal and vertical lines: in the Fusebox corridor, it is the square window in the door. Perhaps this aspect could first appear accented, without the background scene, using markerless AR to help the user find the right viewing position. Or maybe it's better to simply highlight the physical world; I could do this in a subtle way using paste-up graffiti.
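As a rough illustration of David's suggestion, this sketch overlays a fixed cross on the live feed for the viewer to line up with a scene feature such as the square window in the door. It is purely illustrative; the colour, size, and position are arbitrary choices.

```python
import cv2

# Overlay an alignment cross at the centre of the live feed, for the
# viewer to line up with a horizontal/vertical feature in the scene.

live = cv2.VideoCapture(0)  # assumed: default camera

while True:
    ok, frame = live.read()
    if not ok:
        break

    h, w = frame.shape[:2]
    cx, cy = w // 2, h // 2
    # Draw the cross: one horizontal and one vertical green line
    cv2.line(frame, (cx - 40, cy), (cx + 40, cy), (0, 255, 0), 2)
    cv2.line(frame, (cx, cy - 40), (cx, cy + 40), (0, 255, 0), 2)

    cv2.imshow("alignment cross", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

live.release()
cv2.destroyAllWindows()
```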