@nmt How can I use MonkeyLogic (MATLAB) with Pupil Core to estimate how long the participant looks at the images?
Hi @user-e83888! You might want to check out these MATLAB helper scripts for Pupil Core: https://github.com/pupil-labs/pupil-helpers/tree/master/matlab. They should come in useful when building your experiment.
Hello! I am working with the Pupil Core. Originally, I was interacting with it via Python using Pupil Labs' pyuvc fork. I've decided to now use C++, as the speed is a requirement for my use case. I've since installed Pupil Labs' fork of libuvc for C++. I am able to list all the devices and see that the Pupil Core is connected. However, if I attempt to open a connection to it, I always get an error noting that the device is busy. Any thoughts on a fix for this? I've already tried rebooting and reconnecting the device.
Hi @user-ffc425 , while I don't have experience implementing this in C++, may I ask which aspect of pyuvc in particular was too slow?
Have you also tried replicating how the internal C code of pyuvc initiates the connection via libuvc?
pyuvc itself wasn't the problem; rather, the entire application I am developing, of which the pupil controller is one part, needed to be moved over from Python because of how it handles multiprocessing and the load on the system (minimize space and maximize speed). Thanks for that link! I'll give it a read through and see what happens.
I see! Out of curiosity, may I ask what you are developing?
Otherwise, best of luck and just let us know if you have other questions.
Sure! This would actually be somewhat helpful in potentially resolving the issues that led to this. Essentially, I am developing a set of sensors that operate in parallel on a Raspberry Pi. This includes 2 cameras: one from the officially supported list of Picamera modules, and the other the Pupil Core. I was able to get this working and attempted to sync these after the fact with a constant offset in Python, but I noticed that at certain points the alignment would not be constant. These jumps seemed to be random in time (yet constant in the new offset) and potentially, but unconfirmed, caused by the Picamera binding to the C++ libcamera, hence the switch (as recommended by the developer of that library). My lab was also speculating it could be related to buffers building at different speeds between the cameras. Any thoughts on that?
At least when used with Pupil Capture, the Pupil Core cameras are free-running & their sampling rates may not be in sync, so you might need to account for that (a description of how Pupil Capture does it can be found at that link). Just to mention for completeness, the speed at which frames are pulled & processed is also dependent on the power of your CPU, as well as free resources and the framerate/camera settings that you have chosen.
While I'm unable to provide deeper details on integrations with Raspberry Pi, other members of the community have done work on this (e.g., some have used this repo: pupil-video-backend) and can potentially provide help.
If you don't strictly need to use a Raspberry Pi, then you can also synchronize with other sensors via our Lab Streaming Layer integration or the Network API.
Awesome! Thank you so much for this help
Hi all, I am new to development and have a question about the open-source software provided by Pupil Labs. I want to use your code and make some modifications, so I'd like to understand the sequential flow of the code along with its detailed documentation. Can the community help with this? My main concern is finding the section of the code that projects the eye gaze coordinates; that is the part I want to understand, but there are 40+ files, so how should I dig into the code?
Hey lucifer,
What do you mean by "coordinates of eye gaze"? Do you mean where the end result (the one that is published over the network) is generated? If so, I wouldn't be surprised if it is generated in or around ./pupil_src/shared_modules/pupil_detector_plugins/. AFAIK the library that actually detects everything is pye3d, also created by Pupil Labs: https://github.com/pupil-labs/pye3d-detector
Hey everyone, sorry again, but who is currently maintaining pye3d? I would like to use it in my FOSS project, but I'm concerned about the licensing situation. Currently it just says "All Right reversed". Am I even allowed to use this until Pupil Labs decides otherwise? The related issue: https://github.com/pupil-labs/pye3d-detector/issues/70. Some advice would be much appreciated.
Hi @user-84387e! I'll look into this and get back to you!
Hello guys,
I am currently working on a uni project where I have to look at how we can use gaze data to decide whether or not a user finds a word difficult. For this I need to be able to map the current gaze point to the word the user is currently looking at. My current setup is 4 AprilTags around the laptop and then 4 displayed digitally. This works and gives me a pretty stable surface. My approach right now is to use the 'fixations on surfaces' topic for the current point the user is looking at. I then simply map the normalized points to the viewport width and height, since the surface aligns with the displayed page. As you can see from the video, I still struggle to get an accurate reading. I have already made the text size rather large and spaced out, which is not so pleasant to look at.
So I wanted to ask if anyone has some experience / pointers on how I could get more accurate real time readings ( maybe even being able to discern between words of the size here in discord).
Hey @user-eb9361! Thanks for sharing the screen capture! Are you getting sufficient accuracy from Pupil Core irrespective of the surface tracking? That would be the first thing to identify, and if not, to improve upon.
Hi @nmt This is currently the accuracy I am working with. I have tried to set my monitor to a neutral height and with a steady head at a fixed distance. I am using the default calibration method and tried a couple of different things but am always at around this accuracy. One interesting thing I did find is that my monitor brightness was too high and that interfered with it. Otherwise I do not really know where to go. I have tried the CursorControl plugin, but it has the same accuracy.
Thanks for sharing the video! It's helpful. Subjectively, it looks good, but you can objectively test whether it's adequate for your experiment. I'd start by validating the calibration in Pupil Capture. This provides an accuracy figure in degrees. For reference, check out this message: https://discord.com/channels/285728493612957698/285728493612957698/1139566007275507812. What sort of accuracy do you achieve? Next, consider the size of your stimuli on the screen in terms of visual angle. Is the reported gaze accuracy sufficient to confidently determine whether your participant's gaze was on or off the stimuli? If not, you might need to enlarge your stimuli or achieve a more accurate calibration. Once you solve this aspect, the surface tracking should not introduce additional errors, if that makes sense.
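For a rough sanity check, here is a small sketch of how you could compare the stimulus size in visual angle against the reported calibration accuracy. The distance, stimulus size, and accuracy numbers below are made-up placeholders for your own setup:

```python
import math

# Hypothetical setup: adjust to your own screen geometry and validation result.
viewing_distance_mm = 600      # eye-to-screen distance
stimulus_width_mm = 20         # physical width of a word/stimulus on screen
reported_accuracy_deg = 1.5    # value from Pupil Capture's calibration validation

# Visual angle subtended by the stimulus.
stimulus_angle_deg = math.degrees(
    2 * math.atan(stimulus_width_mm / (2 * viewing_distance_mm))
)

print(f"Stimulus subtends ~{stimulus_angle_deg:.2f} deg of visual angle")
if reported_accuracy_deg < stimulus_angle_deg / 2:
    print("Gaze accuracy is likely sufficient to tell on/off-stimulus apart.")
else:
    print("Consider enlarging the stimuli or improving the calibration.")
```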
@nmt , can you please tell me how to send only 4 channels (eye1_x, eye1_y, eye2_x, eye2_y) to MonkeyLogic? Right now MonkeyLogic is receiving 22 channels through LSL.
We only need these channels for proper communication between MonkeyLogic and Pupil Core.
How can I do that?
@nmt how many channels does the Pupil Core eye tracker send through TCP/IP?
If memory serves, the number of streams can depend on which real-time plugins are loaded in Pupil Capture. In any case, you can read more about the data stream in this section of the docs: https://docs.pupil-labs.com/core/developer/network-api/#ipc-backbone
Hi @user-e83888! Is there a specific reason you want to send only 4 channels? The Pupil Core LSL plugin publishes a flattened version of the original Pupil gaze data stream. This data stream is designed to be filtered as needed by the host system processing the incoming data. However, if you truly want to modify the number of channels published by the LSL plugin, it's possible, but you'll need to make some adaptations to the plugin's source code. You can find the source code here: https://github.com/labstreaminglayer/App-PupilLabs/tree/master/pupil_capture/pupil_capture_lsl_relay
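As an alternative to modifying the plugin, here is a minimal sketch of filtering on the receiving side with pylsl. The stream type and the channel labels in `wanted` are assumptions; inspect `labels` (read from your stream's metadata) to see what your relay actually publishes:

```python
from pylsl import StreamInlet, resolve_byprop

# Resolve the Pupil Capture LSL relay stream (type may differ in your setup).
streams = resolve_byprop("type", "Gaze", timeout=10)
if not streams:
    raise RuntimeError("No Gaze stream found on the network")
inlet = StreamInlet(streams[0])

# Read the channel labels from the stream metadata.
labels = []
ch = inlet.info().desc().child("channels").child("channel")
while not ch.empty():
    labels.append(ch.child_value("label"))
    ch = ch.next_sibling()

# Hypothetical label names -- check `labels` to see the real ones.
wanted = ["norm_pos_x", "norm_pos_y", "gaze_point_3d_x", "gaze_point_3d_y"]
indices = [labels.index(w) for w in wanted if w in labels]

sample, timestamp = inlet.pull_sample()
filtered = [sample[i] for i in indices]
print(timestamp, dict(zip(wanted, filtered)))
```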
So while calibrating in MonkeyLogic, the red tracer is not moving from position [0,0] (center), using both LSL and TCP/IP. Do you know what might be the reason?
@nmt How do I make adaptations to the plugin source code? I have tried, but it hasn't worked so far. Do you have any code snippet?
@user-e83888, I took a quick look at their documentation. On the face of it, it should be possible to get things running, as the signal sources must be X & Y gaze points, and we can certainly provide that in the data stream. With that said, is the MonkeyLogic software designed for screen-based-only experiments / do you need gaze in screen-based coordinates?
Yes, to get things running we need screen-based coordinates, but the thing is that the gaze is stuck at the (0,0) coordinate and no movement is shown while running the actual experiment.
It shows the same behavior with TCP/IP as well; gaze is stuck at the (0,0) coordinate.
I guess this illustrates my problem a bit better. I know that zmq is really fast, so it can't be that. Is it that the blink detection itself has some calculation time of roughly 100 ms before it can send the event out? The recording shows the blink spikes, but in the terminal you can see the events arriving with a delay that is constant.
AFAIK this hasn't been measured. Although 100 ms on the face of it does sound quite high. Firstly, are you sure the temporal offset measurement was performed correctly, i.e. from the sync script I shared previously?
I got this error when running "python main.py" and I still don't know what the problem is.
Hi @user-2618c1 , may I ask if you are running this within a virtual environment and what Python version you are using?
Hi, I am working on getting a gaze-contingent experiment running in MATLAB using Psychtoolbox. I am currently able to access the gaze-on-surface norm_pos in MATLAB in real time by using the Network API and subscribing with sub.setsockopt_string(zmq.SUBSCRIBE, "surface"). I do this by calling Python code from MATLAB's Python client (the Python script is attached) to access the gaze positions in MATLAB using: gaze_positions = pyrunfile(pythonScript, 'gaze_positions'); I assumed that once calibration is done, the values in the norm_pos of gaze on surface are normalised gaze values going from 0 to 1, and just multiplying them by the screen resolution would map them to the screen. But I seem to get negative values and values greater than 1. I am attaching some examples of data obtained as I look at the centre of the screen and the right or left middle edge of the screen. In a nutshell, could someone help me figure out the transformations I have to apply to these gaze-on-surface norm_pos values to get them transformed to pixels on the screen, so that I can check if the eye is inside a particular pixel range where I have shown a box or target?
Hi @user-3bcb3f , your Python script looks correct and your understanding of norm_pos for gaze on a Surface is correct.
Before doing some debugging, is it possible to share an image of your Surface definition and AprilTag setup, as displayed in Pupil Capture? You can share this privately via DM if you prefer.
Each surface message also has an additional field, "on_surf": boolean. Just filter by that and you only get the messages whose coordinates range from 0 to 1.
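For reference, a minimal Python sketch of subscribing to the surface topic over the Network API and filtering by on_surf might look like this. The IP, port, and screen resolution are placeholders; the message fields follow the documented surface datum layout:

```python
import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote (default port 50020) for the SUB port.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "surfaces.")  # surface-mapped data

SCREEN_W, SCREEN_H = 1920, 1080  # hypothetical screen resolution

while True:
    topic, payload = subscriber.recv_multipart()
    msg = msgpack.unpackb(payload)
    for gaze in msg.get("gaze_on_surfaces", []):
        if not gaze["on_surf"]:
            continue  # skip samples that fall outside the surface
        x_norm, y_norm = gaze["norm_pos"]
        # Surface origin is bottom-left; flip y for a top-left screen origin.
        px = x_norm * SCREEN_W
        py = (1 - y_norm) * SCREEN_H
        print(f"Gaze on screen: ({px:.0f}, {py:.0f})")
```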
Hello, I am trying to get the head position in real time with the Neon. As I am working with ROS 2, I will use the apriltag package to compute the distance between the head and a QR code. We are having problems getting the video in real time with ROS and Python (not a problem with the Pupil real-time library). Looking around, I found this address for the RTSP video stream: rtsp://192.168.1.114:8086/?camera=world. Is it possible to connect to this simply, or, from what I read, does some websocket need to be implemented to access it?
Hi @user-91a92d , since you say there is not a problem with the Python Real-time package, then may I ask if you can share the exact error in more detail? It might be easier if we can get you up and running that way.
Otherwise, yes, you can use the RTSP stream directly (either over UDP or WebSocket; the WebSocket is not a necessity), but you will also need to process the NAL unit format and parse the resulting scene camera packets with a video codec. While that is certainly doable, we might be able to help you save some time in that regard.
Yes, we tried that and mixed it with Python, but sending the image by copying the data into a ROS msg led to delays. Using image_transport should help, but it is not accessible in our current setup.
You mean you mixed the C++ code with Python code?
If you explain a bit the overall goal with ROS and Neon, then I can provide better advice, as I'm not fully certain why copying the data into a ROS message would be necessary here.
Hi everyone, I have been streaming data from my Neon glasses to my MacBook for quite a while. My code gets stuck while running the following command: frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
The device is connected and both my laptop and the glasses are on the same wifi. Can someone please help me figure this out? I have never once faced this issue before.
Hi @user-dc63f4! Are you using the same wifi network that the setup previously worked on? Has anything about the network changed?
Hi everyone, is there anyway to change the gaze offset for the neon glasses using pupil labs API?
Hi @user-376ddb! Not using the real-time API. If you want to see this feature, it can be upvoted here: https://discord.com/channels/285728493612957698/1247098825403793468 Post-hoc offset correction is possible, though, in Pupil Cloud and Neon Player!
Hello, we're using the Pupil Labs Core product in a VR project to check 3D gaze depth. Our target distance is up to 2000 mm, so we've increased the hardcoded depth value from 500 to 2000. After doing so, we performed calibration at a somewhat farther distance and then checked the resulting depth.
Although we don't expect perfectly accurate values, we do want to see a reasonable approximation. Do you have any advice or suggestions, for example similar to how we extended the depth value?
Currently, we've tried calibrating at an initial depth of 2000 on a typical monitor size. We also attempted calibration with a larger monitor at a longer distance, but as the monitor got bigger, the measured depth value tended to decrease.
Thank you.
Hi @user-fce73e. Just so I understand your aims, are you attempting to calculate viewing depth based on the measured gaze data?
Hi, I was trying to use Neon with Pupil Capture, but it kept saying "World: The selected camera is already in use or blocked". Do you know how to resolve this issue? Also, I then tried to run it from source, but I got the error below. Thank you!
Traceback (most recent call last):
  File "pupil_src/main.py", line 36, in <module>
    from version_utils import get_version
  File "<fstring>", line 1
    (get_tag_commit()=)
                     ^
SyntaxError: invalid syntax
Hi @user-851979! Using Neon with Pupil Capture is somewhat experimental, and will only work on Linux and Mac. You would need to run from source using the neon-support branch: https://github.com/pupil-labs/pupil/tree/neon-support
I see. Thank you for the help. I'm trying to stream the real-time gaze position on a surface defined by surface markers. How can I do that with Neon on Windows?
This is possible with Neon using the real-time screen-gaze package. You can read all about it here: https://github.com/pupil-labs/real-time-screen-gaze
Thank you so much Neil! I have two more questions: 1) What's the definition of the 2D coordinates of the four corners of the marker? Are they pixel positions on the screen? What if I don't want the screen to be blocked and put my markers right outside the four corners of the screen, e.g., the bottom right of a marker touches the upper left corner of the screen? 2) I think if I want to have multiple surfaces, I can just call gaze_mapper.add_surface multiple times. Is this the correct way to do it? Thank you for your help!
Hello! I am using Pupil Labs' libuvc fork to develop C++ code to interact with the Pupil Core camera. Previously, I used Python to do this. I have been able to successfully connect to the camera and capture images. However, I have run into a slight point of confusion. In C++, it seems I can only access frames as MJPEG compressed images; no other format seems to work, as I get an error about it being an invalid mode. My workflow requires the number of bytes for each image to be equal and therefore uncompressed (or compressed to a constant number of bytes). In Python, this was very easily done by simply accessing the .gray property of the MJPEG object. I was wondering if anyone knew a way to replicate this functionality in C++ while retaining the high FPS?
Hi, @user-ffc425 - after you decode the MJPEG frames, the resulting data will be a constant size for each frame, right?
This is what I was thinking, but was wondering the best way to decode. I checked the source code for the Pyuvc fork, and saw you were using libjpeg-turbo and converting on-the-fly when .gray() is called. Is this the recommended workflow? As I was unfamiliar with this library, I was going to use cv2 first to gauge performance and then switch to libjpeg-turbo if it is a problem. I was hesitant to do this on-the-fly conversion due to worries about frame drops and so on
I'm sure OpenCV's decoder will be performant enough for you, but regardless of what decoder you use, this is going to be the workflow you want. Switching to a different, uncompressed format would actually be slower, as it requires sending considerably more data over USB
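Although your implementation is in C++ (where cv::imdecode would be the equivalent call), here is a minimal Python/OpenCV sketch of the decode-to-grayscale workflow being described. The frame attribute in the usage comment is hypothetical:

```python
import cv2
import numpy as np

def mjpeg_to_gray(jpeg_bytes: bytes) -> np.ndarray:
    """Decode one MJPEG frame into a constant-size grayscale image."""
    buf = np.frombuffer(jpeg_bytes, dtype=np.uint8)
    gray = cv2.imdecode(buf, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise ValueError("Failed to decode MJPEG frame")
    return gray  # gray.nbytes is constant for a fixed resolution

# Hypothetical usage with a frame pulled from the camera:
# gray = mjpeg_to_gray(frame.jpeg_buffer)
# assert gray.nbytes == gray.shape[0] * gray.shape[1]
```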
Ah okay, thank you so much!
Hi, @user-851979 - I'll jump in here for @nmt if I may.
"What's the definition of the 2D coordinates of the four corners of the marker? Are they pixel positions on the screen?"
The units/values of the 2D coordinates should be in whatever coordinate space you want your gaze values in. Most people using this package will probably want pixel coordinates, so, as you suspect, these would be pixel positions.
"What if I don't want the screen to be blocked and put my markers right outside of the four corners of the screen?"
You can do this, but you'll need to measure/calculate the coordinates in the extended pixel space. E.g., an AprilTag marker to the left of your monitor would have negative X values.
"If I want to have multiple surfaces, I can just do gaze_mapper.add_surface?"
Indeed! When you add a surface, an ID will be generated for that surface. When you receive mapped gaze coordinates, they will be associated with a surface ID.
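To illustrate, here is a rough sketch of how the surface definitions might look, following the pattern in the package's README. The exact class/method names should be checked against the linked repo, and the marker coordinates below are placeholders for your own layout (note the negative values for markers placed outside the screen):

```python
from pupil_labs.real_time_screen_gaze.gaze_mapper import GazeMapper

# `calibration` comes from the connected Neon device (see the package README).
gaze_mapper = GazeMapper(calibration)

screen_size = (1920, 1080)  # hypothetical screen resolution in pixels

# Marker corners in screen-pixel space, listed per AprilTag ID.
# A marker placed just outside the top-left screen corner gets negative coords.
marker_verts_screen = {
    0: [(-80, -80), (-16, -80), (-16, -16), (-80, -16)],      # outside top-left
    1: [(1936, -80), (2000, -80), (2000, -16), (1936, -16)],  # outside top-right
    # ... remaining markers
}

screen_surface = gaze_mapper.add_surface(marker_verts_screen, screen_size)

# A second surface (e.g., a physical poster) can be added the same way;
# each call returns its own surface object with a unique ID.
# poster_surface = gaze_mapper.add_surface(poster_marker_verts, poster_size)
```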
Hi, I just got the surface markers working, but it's returning weird gaze positions; that is, when I look at the center of the screen, it returns (3.069247245788574, 5.381711959838867). So I'm wondering if there's any way I can check whether my surface is defined correctly? Thank you.
Thank you!
Hello, I would like to be put in contact with a staff member regarding an order, please.
Hi @user-28b965 , if you have questions related to an order, then it will be most efficient to send an email to info@pupil-labs.com (with the original Order ID, if you already placed an order)
Thank you!
I've solved the issue!
@user-f43a29 @nmt @user-cdcab0 When I am saving LSL data, it is saved as normalized x & y coordinates, but I want to get the raw data. How do I extract that?
Hi @user-e83888 , could I ask for some additional clarification on which raw data exactly?
The normalized x & y coordinates are typically considered the raw gaze data, expressed in Pupil Core's world camera coordinate system, where the world image resolution ultimately depends on the settings used in Pupil Capture.
https://github.com/pupil-labs/pupil-helpers/blob/master/LabStreamingLayer/lsl-recording.csv
So basically the normalized x & y coordinates here range from 0 to 1 (or maybe -1 to 1), and I have to store the data in degrees to do post-hoc analysis. Are these normalized x & y coordinates saved in any particular units?
The norm_x/y coordinates range from 0 to 1. They are relative values & unitless.
To be sure I provide the best answer, may I inquire about the ultimate goal of storing the data in degrees? I ask because the gaze data would still be specified in the world camera coordinate system, which is dependent on the camera's pose.
In the meantime, you may find this tutorial useful.
Since real-time gaze communication is not possible between the Pupil Core eye tracker and MonkeyLogic, we decided to do post-hoc analysis of whether the participant is looking at the image or not, by checking whether the coordinates lie in our region of interest.
Since this tutorial uses 3 coordinates and we are using only 2, is there any function to just extract the 2 coordinates, x & y? I am kind of new to analysis, so I'm having some problems with that.
It would be a great help π
@user-e83888 Since you want to map gaze onto a screen, then I think you might have an easier time with a slightly different approach:
Use the Surface Tracker together with Annotations such as stimulus_1.start and stimulus_1.end, which could potentially be sent by your experimental setup using the Network API. Otherwise, even with the raw gaze data in spherical coordinates (i.e., gaze angle), you will still have to solve the problem of how to map it to screen coordinates, which the Surface Tracker already does for you.
If you still want to convert the gaze ray to spherical coordinates, then the code in that linked tutorial is still relevant. Gaze angle is a 3D entity, so you will need a Z direction, too. Since Pupil Capture and that LSL tool you linked already provide gaze_point_3d_x/y/z, one can use that.
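For completeness, here is a small sketch of converting a gaze_point_3d sample to gaze angles, along the lines of that tutorial. The sign conventions may need adjusting to match the world camera's coordinate system, and the sample values below are made up:

```python
import numpy as np

def cart_to_spherical(x, y, z):
    """Convert a gaze point in world-camera coordinates to gaze angles (deg).

    Azimuth/elevation are relative to the world camera's optical axis (+z),
    so they depend on the camera's pose on the headset.
    """
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.degrees(np.arctan2(x, z))    # left/right of the optical axis
    elevation = np.degrees(np.arcsin(y / r))  # above/below the optical axis
    return azimuth, elevation

# Example with a made-up gaze_point_3d sample (mm, camera space):
az, el = cart_to_spherical(50.0, -20.0, 400.0)
print(f"azimuth {az:.1f} deg, elevation {el:.1f} deg")
```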
Can you please explain a little bit more about post-hoc sync to the LSL stream? How can I do that? Do I have to send the Annotations programmatically, or can it be done using the GUI?
I will definitely give it a try...
Hi @user-e83888 , post-hoc sync is done by finding the same Annotations in the different streams. Provided you have time in the same units for each stream, you then align the Annotations.
The details of how to use Annotations are contained in the links in the previous message: https://discord.com/channels/285728493612957698/446977689690177536/1333456557840797728
When you say "can it be done using a GUI", do you mean that you want to manually click every time an event of interest occurred, after the experiment has finished? Or, do you want to do it during the experiment, while collecting your data?
Not every time, just the start and the end of the event markers. It's better to do it while collecting data at the time of the experiment, but the data is saved in a .mat file (MonkeyLogic).
Will I get different columns for different Annotations inside the CSV or .mat file?
Hi @user-e83888 , I am not exactly sure what the "event markers" are, but since you are using LSL to sync with MonkeyLogic, then I assume you need precise time synchronization. Therefore, if you want to do it during the experiment, then you would probably want to do it programmatically.
The Annotations will be saved in Pupil Capture's recording directory. As mentioned here (https://discord.com/channels/285728493612957698/446977689690177536/1333456557840797728), you want to activate the Annotations plugin and run a Pupil Capture recording in parallel.
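For reference, here is a minimal sketch of sending an Annotation programmatically over the Network API, modeled on the pupil-helpers remote annotations example. The label is just an illustration; the Annotation plugin must be active in Pupil Capture:

```python
import time
import zmq
import msgpack

ctx = zmq.Context()

# Connect to Pupil Remote and request the PUB port and current Pupil time.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pupil_remote.send_string("t")  # current Pupil time
pupil_time = float(pupil_remote.recv_string())

publisher = ctx.socket(zmq.PUB)
publisher.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB socket time to connect before sending

def send_annotation(label, timestamp, duration=0.0):
    annotation = {
        "topic": "annotation",
        "label": label,
        "timestamp": timestamp,
        "duration": duration,
    }
    payload = msgpack.packb(annotation, use_bin_type=True)
    publisher.send_string(annotation["topic"], flags=zmq.SNDMORE)
    publisher.send(payload)

# e.g., mark the start of a trial (label is just an example):
send_annotation("trial_1.start", pupil_time)
```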
We are using the UP arrow key to keep displaying an image. If participants keep pressing the UP arrow key with a maximum reaction time of 500 ms in a trial, they will see the image for as long as they keep pressing the button; otherwise the image will turn off and, after some ms, the next trial will start.
How is this data going to be recorded inside the LSL CSV file after using just the Surface Tracker plugin and a Pupil Capture recording in parallel?
Do I get the mapped gaze coordinates just by doing these 2 steps, instead of adding Annotations?
I am already getting event markers associated with timestamps inside MonkeyLogic by connecting our LSL plugin to their GUI.
The Annotations and surface-mapped gaze data are saved in Pupil Capture's recording directory. You can use Pupil Player to investigate and export that data.
If you do not use Annotations, then you would still get surface mapped gaze data, but without Annotations, you would no longer have a reliable way to do post-hoc time sync with the LSL and MonkeyLogic output.
@user-f43a29 @user-cdcab0 @nmt why is this error coming up?
world - [ERROR] pupil_capture_lsl_relay: Error extracting gaze sample: 0
(the same error line repeated many more times)
Hi @user-e83888 , have you made sure to calibrate before starting to collect gaze data?
yes now it's solved
Hey, I am using Pupil Core. I have developed a Python code that runs my experiments. I included the calibration process in the code but whenever the accuracy and precision are not sufficient I have to re-run the experiment. Is there any way to include an accuracy and precision minimum threshold in the Python code that has to be met in order to proceed to the presentation of the stimuli?
Hi @user-412dbc , you may want to try listening for the calibration.result.v2 notification, after checking for the calibration.successful notification.
My colleague, @user-d407c1 , has previously provided a script that shows how to check for calibration.successful, which can be adapted accordingly.
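To sketch the idea, the ports and payload handling below follow the standard Network API pattern; inspect the received dicts (and the order in which the notifications arrive) to decide which fields you need for your accuracy/precision threshold:

```python
import zmq
import msgpack

ctx = zmq.Context()

# Request the SUB port from Pupil Remote (default port 50020).
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "notify.calibration")

calibrated = False
while not calibrated:
    topic, payload = subscriber.recv_multipart()
    topic = topic.decode()
    notification = msgpack.unpackb(payload)
    if topic.startswith("notify.calibration.failed"):
        raise RuntimeError("Calibration failed, please repeat it")
    if topic.startswith("notify.calibration.result"):
        # Inspect this payload to see whether it contains what you need,
        # e.g. to compute accuracy against your reference locations.
        print(notification)
    if topic.startswith("notify.calibration.successful"):
        calibrated = True
```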
@user-f43a29 @user-cdcab0 So basically norm_x & norm_y store values on a Cartesian plane. I am able to track gaze using the Surface Tracker plugin, but I need to visualize the area outside of the image as well. I think if the values are stored on a Cartesian plane, then I can easily extract the values outside the image as well.
??
Hi @user-e83888 , norm_pos_x/y are the normalized coordinates of gaze in the world camera image, so yes, on a plane as defined by the image boundaries. They are not the surface-mapped gaze coordinates.
I am not exactly sure what is meant by "track outside the area of the image". Do you mean outside the edges of the world camera image? Or outside the boundaries of the surface or calibration area?
So in one of your tutorials it's written that (0,0) is the bottom left corner of the surface and (1,1) is the top right corner of the surface. Is that also the case when the data is stored inside the LSL recording CSV?
If yes, then I can easily calculate the image size and the distance from the left, right, top & bottom of the image, to know where the image is placed according to these normalized coordinates.
As mentioned here (https://discord.com/channels/285728493612957698/446977689690177536/1333736555503816725), the surface-mapped gaze data is not stored in the LSL output, but in Pupil Capture's recording directory. You will then need to post-hoc synchronize that data to the data in the LSL CSV with appropriate use of Annotations.
When you say "image", do you mean the stimulus that you display on your monitor?
I mean the LSL file described in the pupil-helpers GitHub repo.
yes
Then, yes, you can use the surface-mapped gaze data to determine when the image on the monitor was gazed at/fixated. As you say, you could certainly also represent the image position in normalized coordinates.
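As a small illustration with made-up numbers, checking whether a surface-mapped gaze sample falls inside the image region (expressed in the same normalized surface coordinates, origin at the bottom-left) could look like this:

```python
# Hypothetical image placement, in normalized surface coordinates.
IMG_LEFT, IMG_RIGHT = 0.30, 0.70
IMG_BOTTOM, IMG_TOP = 0.25, 0.75

def gaze_on_image(x_norm, y_norm, on_surf):
    """True if a surface-mapped gaze sample lies inside the image ROI."""
    if not on_surf:
        return False
    return IMG_LEFT <= x_norm <= IMG_RIGHT and IMG_BOTTOM <= y_norm <= IMG_TOP

print(gaze_on_image(0.5, 0.5, True))   # inside the image
print(gaze_on_image(0.9, 0.5, True))   # on the surface but outside the image
```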
Yes, I know that, but I think the data stored inside the LSL file is also normalized according to this coordinate format. Is that the case?
I am talking about this CSV:
https://github.com/pupil-labs/pupil-helpers/blob/master/LabStreamingLayer/lsl-recording.csv
Let's say I don't want to use the Surface Tracker plugin (it's not feasible to attach AprilTags in our research experiments) but still want to know the values according to these normalized coordinates. Then it might be a better approach, right?
Since you use MonkeyLogic, can I first confirm if you are using Pupil Core with monkeys?
Nope, just with humans.
Ok. Will they be sitting in a chin rest or will head motion be in some way restrained?
I can ask for a chin rest, but I'm not sure whether we will get one.
Ok, without AprilTags and the Surface Tracker, it is tricky, since with a head-mounted eye tracker, you need some way to account for any changes in the position of the wearer's head relative to the screen. In addition, with Pupil Core, the gaze data are by default in the coordinate system of the world camera, which can have an arbitrary rotation.
If the participants' heads were restrained in some way, such as in a chin rest, then you could potentially act as if Pupil Core were a remote eye tracker and correlate gaze coordinates to screen coordinates. It would amount to projecting the gaze ray onto the display while assuming a fixed center & rotation of the world camera, as well as a fixed position for the display.
However, without something like a chin rest, even a bit of head motion will quickly break that assumption and lead to inaccurate data. It can be tried of course, but I cannot make any guarantees about that approach.
In short, just using Pupil Core's default gaze coordinates as if they were surface coordinates will actually be less accurate in general. It is not necessarily a better approach.
To answer your earlier question, the data in that LSL file are not normalized according to the same format as the surface mapped gaze data:
To put it another way, the surface often appears in a sub-region of the world camera image, but not always. It can have arbitrary extent and position.
Yes, I just noticed that the LSL gaze data goes outside the 0-1 range. So how small can the AprilTags be for Pupil Core to still detect them on the surface?
Hi @user-e83888 , please see the message that I left in a thread here (https://discord.com/channels/285728493612957698/1334218558237970514/1334445425289330749). I tried to summarize the points and point to relevant resources that should help you get past the finish line.
How do I extract trial start and trial end data from the surface files, as we have more than 20 trials?
Our goal is to see how long the participant looks at the stimulus in a given trial. Let's say they look for 20 s in trial 6 and 25 s in trial 4; that's what we have to analyse.
Dear community, I would like to pair the Pupil Neon w/ a stereoscopic psychophysics setup using Matlab/Psychtoolbox and I seem a bit stuck on the calibration part.
Background and constraints: I'm aiming for a screen-based, head-fixated, gaze-contingent experiment investigating visual psychophysics in stereo vision. The world/scene camera won't be of any use b/c an optical setup w/ prisms and mirrors will be providing separate visual stimuli to either eye - i.e., the scene camera won't be matching the visual input. Software setup will include MATLAB/Psychtoolbox. Ideally, Pupil Neon will be used to control for stable fixation in a gaze-contingent manner. Therefore, offline/post-hoc processing is not an option.
Current state: I checked the docs, searched the chat, and had a look at the pl-neon-matlab examples provided, but I wasn't able to find the answer I'm looking for so far. I also read the Python/PsychoPy example (https://docs.pupil-labs.com/neon/data-collection/psychopy/). I get that usually one would simply present AprilTags and leave the calibration issue to surface-based gaze tracking using the scene camera. As per my constraints, this won't work out, unfortunately (I believe).
Suggested solution: I only see a way forward by retro-fitting the Pupil Labs Neon interface w/ explicit calibration procedures. So, one could maybe use Pupil Labs Neon gaze data and perform an explicit calibration/registration against a known surface structure, e.g., a 13-point calibration with known marker positions. From what I read, it seems like neither the MATLAB/Psychtoolbox nor the PsychoPy integration of Pupil Labs Neon provides an explicit API call for this. So it would be some home-brew solution(?).
I would really appreciate guidance on this problem from the community and/or support! Maybe there's already a best practice for this which I did not stumble upon?
Many thanks in advance!
Hi @user-79f0b2 , interesting use case!
Since Neon is deep-learning powered and calibration-free, the software itself does not have such a calibration step. You simply put on Neon and you are eyetracking. However, I see how in your case, you need an additional step.
May I first ask if Neon will be mounted and have a fixed, static relationship with the display or prismatic setup? Then, you only need to do a one time Mount Calibration, similar to what we do for Neon XR. We can provide the details.
Also, although I don't know the exact details of your prism setup, could the Gaze Offset Correction feature be applicable in your case?
In any event, the MATLAB integration will be expanded today to provide the optical axis data, and you could also perform a calibration step with those. You would receive that data by enabling Compute eye state in the app settings and then calling device.receive_gaze_datum(). This custom calibration step would require some programming to implement, as you say. If you would at all be interested in potential assistance with that from us, we also offer Support packages.
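To sketch what such a home-brew calibration step could look like (not an official procedure): collect gaze/optical-axis samples while the participant fixates known screen targets, then fit a simple least-squares polynomial mapping. Shown in Python for brevity, but the same fit is equally doable in MATLAB; all variable names are placeholders:

```python
import numpy as np

def design_matrix(xy):
    """Second-order polynomial features for a simple 2D regression."""
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

def fit_calibration(raw_xy, target_xy):
    """Fit raw gaze/optical-axis samples (N x 2) to known screen targets (N x 2)."""
    A = design_matrix(raw_xy)
    coeffs, *_ = np.linalg.lstsq(A, target_xy, rcond=None)
    return coeffs  # shape (6, 2)

def map_to_screen(raw_xy, coeffs):
    return design_matrix(np.atleast_2d(raw_xy)) @ coeffs

# Hypothetical usage: `raw` holds averaged samples per fixation target,
# `targets` the corresponding known on-screen positions (same order).
# coeffs = fit_calibration(raw, targets)
# screen_xy = map_to_screen(latest_sample, coeffs)
```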
how to extract trial start and trial end
Hi @user-79f0b2 , the pl-neon-matlab integration has been updated. You can now extract the optical axis and other eye-state data directly from the datum returned by device.receive_gaze_datum().
Hi @user-f43a29,
thank you very much for the fast action!
First of all, it's reassuring to hear that my use case sounds interesting, and also that I didn't miss some very obvious solution in the docs or examples. Currently, I'm evaluating my options - time- and resource-wise. It's definitely a plus to have the optical axis data available to kick-start this! In case I decide to go down this route, I'll be sure to let you know about the outcome (and if further support is required, of course, too).
Thank you very much for the fast assistance with my use case.
@nmt @user-f43a29 What do the world timestamp and gaze timestamp mean? Are they in milliseconds or seconds, or do they take the clock time of the computer itself?
If I want to extract the data trial by trial from these timestamps, how do I do that?
We know how long each particular trial is. Please let me know how to do that.
Hi @user-e83888 , I've also responded in the thread where we had been communicating, we can continue discussion there.
Regarding extracting data for a specific trial:
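One approach is to load Pupil Player's surface export and keep only the samples that fall between each trial's start and end timestamps (e.g., as recovered from your Annotations). Here is a minimal sketch; the file name, column names, and trial timestamps are placeholders to adapt to your own recording:

```python
import pandas as pd

# Pupil Player's surface export names the file after your surface,
# e.g. gaze_positions_on_surface_<surface name>.csv -- adjust accordingly.
gaze = pd.read_csv("gaze_positions_on_surface_Surface1.csv")

# Trial boundaries in Pupil time (placeholder numbers), e.g. taken from
# your Annotations for trial start and end.
trial_start, trial_end = 1234.5, 1254.5

trial_gaze = gaze[
    (gaze["gaze_timestamp"] >= trial_start)
    & (gaze["gaze_timestamp"] <= trial_end)
]

# Rough dwell-time estimate: fraction of samples on the surface times trial length.
on_surf_fraction = trial_gaze["on_surf"].mean()
dwell_time_s = on_surf_fraction * (trial_end - trial_start)
print(f"Trial dwell time on surface: {dwell_time_s:.2f} s")
```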