It could also be because of your Unity rendering options; Single Pass Stereo, if I recall correctly, can cause these issues. For what it's worth, I would also check whether you have more than one camera rendering to your VR headset.
The camera rendering was definitely the case! I forgot I had a second camera in the environment that was being rendered. It has been fixed now :)
@user-ca5273, moving this to the vr-ar channel.
In that case, things are a little bit more complicated. Gaze data in gaze_positions.csv doesn't carry any information about which objects in the VR space were gazed at. Presumably you know the 3D coordinates of the molecules being presented in your VR space?
@nmt Help me understand: what do gaze_point_3d_x, gaze_point_3d_y, gaze_point_3d_z and norm_pos_x, norm_pos_y refer to? What data is contained in those columns?
Are you saying we can't use this data for what we want to do?
You can see descriptions for those variables here: https://docs.pupil-labs.com/core/software/pupil-player/#gaze-positions-csv If you want to determine which objects were gazed at, and subsequently compute the distance between those objects, you might need to consider raycasting. Further details about that in this message: https://discord.com/channels/285728493612957698/285728635267186688/1193859535157338152
Hi @nmt, I need a bit more clarification to understand what you're saying. It seems that the gaze_point_3d_x, gaze_point_3d_y, gaze_point_3d_z, norm_pos_x, and norm_pos_y columns in the fixations.csv don't track the user's gaze. Is that what you're saying?
I don't necessarily need to know what object was at a coordinate in the VR world, just that at this point the user's gaze was mapped to this x,y coordinate and then shifted to that x,y coordinate, without mapping that to an object at that coordinate.
Our aim is not to calculate the distance between the objects but to track the gaze of the user, if that makes sense. We just need to know which coordinates captured the user's gaze.
@nmt sending a quick follow-up on the previous message.
Those variables do track the user's gaze, @user-ca5273. I have a follow-up question: are you sure you only want to compute 2D distance? Aren't your molecules presented at 3D locations in the VR space?
@nmt the molecules are in 3D space, but unfortunately we don't have access to the Unity code to do raycasting as you suggested.
This is the challenge as far as I see it: norm_pos_* are 2D coordinates, relative to the world camera image. So, if you compute the distance between two fixation locations, you're essentially working in two dimensions, and the computed distance will really depend on the viewing perspective of the observer. If everything is static and presented on one plane, e.g. perpendicular to the viewer, this might not be an issue. But from what I understand, you cannot make this assumption.
Alternatively, gaze_point_3d_* does technically give a 3D gaze point (x, y, z; unit: mm). But in practice, the depth component (z) can be noisy/inaccurate. This is because computing viewing distance based on ocular vergence is inherently difficult. See this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1098535377029038090
This is why I recommended finding the intersection of gaze with your objects in VR space that have a known 3D position. That way, it would be somewhat straightforward to compute an accurate distance between the fixation locations.
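Just to illustrate that idea (this is not your Unity setup, only a rough sketch): you can intersect the gaze ray with objects whose 3D positions you know, approximating each molecule as a sphere. The gaze_point_3d_* values come from the export; the object names, centers, radii, and the assumption that the ray starts at the scene camera origin are all placeholders for whatever your scene actually uses.
```python
import numpy as np

# Hypothetical molecule positions and sizes (mm, same coordinate system as the gaze data)
objects = {
    "molecule_A": {"center": np.array([120.0, -40.0, 800.0]), "radius": 50.0},
    "molecule_B": {"center": np.array([-200.0, 10.0, 950.0]), "radius": 50.0},
}

def gazed_object(gaze_point_3d, objects, origin=np.zeros(3)):
    """Return the closest object hit by the gaze ray, or None.

    The ray starts at `origin` (assumed here to be the scene camera origin)
    and points towards gaze_point_3d; objects are approximated as spheres.
    """
    direction = np.asarray(gaze_point_3d, dtype=float)
    direction = direction / np.linalg.norm(direction)

    best_name, best_t = None, np.inf
    for name, obj in objects.items():
        # Ray-sphere intersection: solve |origin + t*direction - center|^2 = radius^2
        oc = origin - obj["center"]
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - obj["radius"] ** 2
        disc = b * b - 4.0 * c
        if disc < 0:
            continue  # the ray misses this sphere entirely
        t = (-b - np.sqrt(disc)) / 2.0  # nearest intersection along the ray
        if 0 < t < best_t:
            best_name, best_t = name, t
    return best_name

# e.g. gaze_point_3d_x/y/z from one row of gaze_positions.csv
print(gazed_object([115.0, -35.0, 790.0], objects))
```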
If you don't have access to the positions of the VR objects, have you considered working in degrees of visual angle instead? This would be straightforward to compute as the necessary data are already contained within the exports.
Hi! I was wondering, in pupil_positions.csv, does the diameter correspond to the actual or the apparent (visual) diameter? Like, if the person looks up, would the pupil appear as an ellipse rather than a circle?
Hi @user-c32d43 👋! That file offers multiple columns and generally includes both values: the apparent pupil size and a physiological approximation.
Data made available by 2D pupil detection:
ellipse_center_x - x center of the pupil in image pixels
ellipse_center_y - y center of the pupil in image pixels
ellipse_axis_a - first axis of the pupil ellipse in pixels
ellipse_axis_b - second axis of the pupil ellipse in pixels
ellipse_angle - angle of the ellipse in degrees
If you ran the 3D pupil detection, you will also have a field named diameter_3d, which gives you the diameter of the pupil scaled to mm, based on an anthropomorphic average eyeball diameter and corrected for perspective.
Have a look here for more detailed information about each field.
Additionally, if you are looking into pupillometry using the VR-Addons or Pupil Core, I would recommend reading our Best Practices.
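If it helps, here's a minimal pandas sketch for pulling those two size estimates out of the export. The file path is a placeholder, and filtering on the method string assumes the standard 2D/3D detector labels in your recording:
```python
import pandas as pd

# Path is a placeholder for your own Pupil Player export
pupil = pd.read_csv("exports/000/pupil_positions.csv")

# Rows from the 3D detector (method contains "3d", e.g. "pye3d ... real-time")
# carry diameter_3d; rows from the 2D detector only have the pixel-based values.
pupil_3d = pupil[pupil["method"].str.contains("3d", case=False)]

# diameter    -> apparent pupil size in image pixels (perspective-dependent)
# diameter_3d -> physiological estimate in mm
print(pupil_3d[["pupil_timestamp", "diameter", "diameter_3d"]].head())
```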
Could you explain a bit more about the visual angle approach? Or share some documentation on how that can be done, and which exported files are needed? Thanks
Sure. Conceptually, you'd want to look at the periods between classified fixations, as these contain information about the transitions of gaze between fixation locations. In the fixations export, you can see the start and end time of each fixation, so it's somewhat straightforward to identify the inter-fixation interval. From there, you can check out the gaze data during those intervals.
To compute gaze shifts in degrees of visual angle, you can work with gaze_point_3d_*. It should be converted to spherical coordinates, and the relative change between each timepoint defines the angular gaze transition.
If you're comfortable running some Python code, all of the relevant functions are in this tutorial we wrote
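Not a replacement for the tutorial, but roughly the idea in code, assuming the standard fixations.csv and gaze_positions.csv exports. The paths are placeholders, and the duration-in-milliseconds and axis conventions should be double-checked against your own export:
```python
import numpy as np
import pandas as pd

# Paths are placeholders for your own Pupil Player export
fixations = pd.read_csv("exports/000/fixations.csv")
gaze = pd.read_csv("exports/000/gaze_positions.csv")

# Convert gaze_point_3d_* (mm, scene camera coordinates) to azimuth/elevation in degrees
x, y, z = gaze["gaze_point_3d_x"], gaze["gaze_point_3d_y"], gaze["gaze_point_3d_z"]
r = np.sqrt(x**2 + y**2 + z**2)
gaze["azimuth"] = np.degrees(np.arctan2(x, z))    # left/right component
gaze["elevation"] = np.degrees(np.arcsin(y / r))  # up/down component

# Walk over consecutive fixation pairs and look at the gaze samples in between
shifts = []
for prev, nxt in zip(fixations.itertuples(), fixations.iloc[1:].itertuples()):
    t0 = prev.start_timestamp + prev.duration / 1000.0  # duration is in ms
    t1 = nxt.start_timestamp
    segment = gaze[(gaze["gaze_timestamp"] >= t0) & (gaze["gaze_timestamp"] <= t1)]
    if len(segment) < 2:
        continue
    # Sum of sample-to-sample angular changes during the inter-fixation interval
    # (small-angle approximation, treating azimuth/elevation as locally Euclidean)
    d_az = np.diff(segment["azimuth"])
    d_el = np.diff(segment["elevation"])
    shifts.append(np.hypot(d_az, d_el).sum())

print(f"Mean gaze shift between fixations: {np.mean(shifts):.1f} deg")
```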
@nmt, I've looked at the documentation on https://docs.pupil-labs.com/core/terminology/#offline-fixation-detection. I just want to confirm whether the dispersion data is also in mm or in pixels.
Hi @user-ca5273! Fixation dispersion is reported in degrees; you can check the implementation details here.
Just a quick note about the dispersion reported for each fixation. The fixation detector is a dispersion-based filter: it checks whether gaze points stay within a given area (defined by a maximum dispersion threshold) for long enough (defined by temporal thresholds). This is not the dispersion in degrees between separate fixations, just to avoid any ambiguity!
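To make that distinction concrete, a toy dispersion-based filter looks roughly like this. This is a simplified sketch, not the actual Pupil Player implementation; the thresholds and the dispersion measure are placeholders:
```python
import numpy as np

def detect_fixations(timestamps, azimuth, elevation,
                     max_dispersion_deg=1.5, min_duration_s=0.08):
    """Toy dispersion-based fixation filter.

    Grows a window of consecutive gaze samples; as soon as the samples
    spread beyond max_dispersion_deg, the window is closed and, if it
    lasted at least min_duration_s, reported as one fixation.
    """
    timestamps = np.asarray(timestamps)
    azimuth = np.asarray(azimuth)
    elevation = np.asarray(elevation)

    fixations = []
    start = 0
    for end in range(1, len(timestamps)):
        az = azimuth[start:end + 1]
        el = elevation[start:end + 1]
        # Rough dispersion measure: angular extent of the window in each direction
        dispersion = np.hypot(az.max() - az.min(), el.max() - el.min())
        if dispersion > max_dispersion_deg:
            if timestamps[end - 1] - timestamps[start] >= min_duration_s:
                fixations.append((timestamps[start], timestamps[end - 1]))
            start = end
    return fixations

# e.g. using the azimuth/elevation columns computed in the earlier sketch:
# detect_fixations(gaze["gaze_timestamp"], gaze["azimuth"], gaze["elevation"])
```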
Thanks for the clarification. This was very helpful