πŸ’» software-dev


user-225593 11 May, 2024, 08:49:27

Hi, Pupil Labs Team, I am new to the Pupil Labs Neon eye tracker and recently started working with it for real-time detection of gaze in terms of screen-based coordinates. I have followed the instructions at https://github.com/pupil-labs/real-time-screen-gaze and run the code. But while running the program, it shows warnings/errors like "unable to create requested socket pair unable to create requested socket pair WRN: Matrix is singular. WRN: Matrix is singular." and data like "Gaze at 0.03125879913568497, 1.3602060079574585 Gaze at 0.03569450229406357, 1.4242486953735352", followed by the warning WRN: Matrix is singular, which repeats again. Could you please kindly assist in resolving this error? The purpose is to detect gaze in screen-based coordinates and change the stimuli on the screen accordingly as feedback to the user. I also tried the gaze-controlled cursor demo package, which also resulted in some errors related to the PySide module (PySide6.QtCore.QObject.connect). I am currently using Python 3.9.7. However, the program for scene-based gaze coordinates (https://docs.pupil-labs.com/neon/real-time-api/tutorials/) worked well and yielded gaze and pupil diameter information as well. However, my requirement is gaze information in the form of screen-based coordinates. I also wonder whether it is possible to see markers on the screen when running the above program for screen-based coordinates? Thank you in advance for your support.

nmt 11 May, 2024, 11:18:28

Hi @user-225593! Let's start with the real-time-screen-gaze output.

From what you've posted, it seems like the tool is running okay – you've actually got screen-mapped gaze coordinates, denoted by 'Gaze at xxx, xxx'.

Those 'Matrix is singular' messages can occur when the homography cannot be reliably computed, e.g. because there aren't enough non-collinear marker correspondences in a given frame. They can usually be ignored.
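For intuition, here is a minimal homography sketch using OpenCV directly (made-up coordinates, not the real-time-screen-gaze internals):

import cv2
import numpy as np

# Four detected marker corners in the scene camera (pixels) and the surface
# corners in normalized surface coordinates -- made-up values for illustration.
scene_pts = np.array([[100, 100], [500, 110], [510, 400], [90, 390]], dtype=np.float32)
surface_pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=np.float32)

# The mapping needs at least four non-collinear correspondences; frames where
# the detected points are insufficient or (nearly) collinear have no unique
# solution, which is what the "Matrix is singular" warning reflects.
H, _ = cv2.findHomography(scene_pts, surface_pts)

# Map a gaze point from scene-camera pixels into surface coordinates.
gaze_scene = np.array([[[320.0, 260.0]]], dtype=np.float32)
gaze_surface = cv2.perspectiveTransform(gaze_scene, H)
print(gaze_surface)  # values outside 0..1 mean gaze fell outside the surface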

user-60ce61 11 May, 2024, 08:56:34

Hello guys,

I hope you are doing well! I am using Core eye tracking and have developed a Python experiment where I will present some visual stimuli (code attached in txt) and I wanted to get your feedback on some questions that I have.

1) With regards to the components of the code, did I miss any vital parts that I should have included? (The parts I have included are: time synchronization, start recording, start calibration, stop calibration, stop recording.)
2) The code runs functionally both with and without the ''Time synchronization'' part, and I get the same datasets whether I include it or not. This makes me wonder if the code I used for time synchronization does anything.
3) With the existing code, even when the calibration fails, the code continues running and the experiment begins. Is there any way to make the calibration repeat until it is done properly before moving on to the experimental part?

Thank you in advance, Panos

Experiment.txt

nmt 11 May, 2024, 09:00:31

Hey @user-412dbc! I've moved your message to this relevant channel.

Feedback points:
1. The time sync part of your code currently only prints the inferred remote Pupil clock time from a local clock measurement. It's not doing anything meaningful beyond that. The example you've copied from mainly demonstrates how to make that inference. You'll therefore need to think about which local clock you're trying to sync Pupil time with, and then use that inference accordingly (a rough sketch follows below this list).
2. There's no super straightforward way to repeat the calibration depending on how successful it was. However, you might want to add some user-input functionality, such as a key press for when you're satisfied that enough data points have been collected for a good calibration.
3. The code outside the main function, which performs operations such as starting and stopping recording, starting and ending calibration, and running the experiment, could be organised into separate functions. The calls to these functions would then occur within the main function.
4. setup_pupil_remote_connection does the same thing as the block of code at the end where you create a new zmq context and socket. This duplication could be eliminated by reusing the function.
5. Lastly, there is no error handling for network issues in your code. You might want to include try-except blocks to handle potential errors when making network requests.
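A rough sketch of what point 1 could look like in practice, assuming the default Pupil Remote address/port and that the local clock you care about is the one timestamping your stimuli (adapt to your own script):

import time
import zmq

ctx = zmq.Context.instance()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

def estimate_clock_offset():
    # Return (pupil_time - local_time), roughly compensating for request latency.
    local_before = time.monotonic()
    pupil_remote.send_string("t")  # ask Pupil Capture for its current timestamp
    pupil_time = float(pupil_remote.recv_string())
    local_after = time.monotonic()
    local_mid = (local_before + local_after) / 2.0
    return pupil_time - local_mid

offset = estimate_clock_offset()

# Later, when a stimulus appears, convert its local timestamp into Pupil time
# so it can be aligned with the recorded gaze data.
stimulus_onset_local = time.monotonic()
stimulus_onset_pupil = stimulus_onset_local + offset
print("stimulus onset in Pupil time:", stimulus_onset_pupil)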

Good luck!

user-293aa9 13 May, 2024, 22:09:17

Hey there! I am not sure what channel this belongs in but here goes. I want to reference some recent chat messages for credit, specifically some of the announcement posts. I found this archive at https://chat-archive.pupil-labs.com/ , but it hasn’t been updated in a while. Is that something you’re taking care of? If so it would be super cool if the recent chats could be added there. πŸ˜„ Thanks a bunch!

user-d19a4e 14 May, 2024, 06:26:32

Hi! When I tried https://docs.pupil-labs.com/alpha-lab/map-your-gaze-to-a-2d-screen/, I always got this error when running the Python script. My video has an audio recording. I'm confused as to why this error occurred.

Chat image

user-d407c1 14 May, 2024, 06:36:17

Hi @user-083b4f ! I've moved the message to this channel. It seems like there is an issue when decoding the average rate of the audio frames; the audio frequency of Neon recordings recently changed from 48 kHz to 44.1 kHz.

Are you running it from the repo or the pip package? If you installed it from source, you can try changing this line 23 to guessed_rate or hard-code its value for the different streams.

user-d407c1 14 May, 2024, 06:36:47

I would need to take a look and update the repo as soon as I can

user-d407c1 14 May, 2024, 09:57:10

Hi @user-293aa9 ! Updated! You may have to do a hard refresh to see the changes.

user-293aa9 14 May, 2024, 10:14:15

I see it now, thanks! πŸ™Œ

user-412dbc 14 May, 2024, 10:35:42

Is there any way to send a ''pause recording'' command instead of ''stop recording''? What I want to achieve is to present some visual stimuli and associated questions, and I want the eye tracker (Core) to record only while the stimuli are on screen. e.g., pupil_remote.send_string('r') print(pupil_remote.recv_string())

nmt 14 May, 2024, 11:16:34

Pausing a recording is not possible. May I ask why you would really want to pause a recording?

nmt 14 May, 2024, 11:16:45

What would be the end goal of doing so?

nmt 15 May, 2024, 08:37:43

Hi @user-225593! It sounds like that could be due to poor marker detection. When the markers are not properly detected, no gaze mapping will occur. I can recommend previewing the scene camera either in the app or in the monitor app to ensure that the markers are large enough, have good borders with white edges, and are clear/not washed out from scene camera exposure. You could even confirm that they are being robustly detected in your given environment by using the Marker Mapper enrichment in Pupil Cloud. When there are gaps in the mapping, it may appear that there are longer delays. Let us know how you get on!

user-225593 15 May, 2024, 09:36:39

Thanks a lot @nmt for suggesting a possible solution. Can I add more markers in the Python code as instructed at https://github.com/pupil-labs/real-time-screen-gaze? Could you please check this code and provide your recommendations (attached herewith)? We are interested in real-time gaze detection using Neon, which should run continuously and generate gaze parameters without any delay. Thank you very much!

SPOILER_real_time_neon_multiple_markers.txt

nmt 15 May, 2024, 09:56:05

Hi @user-225593! Thanks for sharing the code. Unfortunately, it's not really possible for me to determine whether the markers will be detected by just looking at the code. Your best bet is to make a recording with the markers present, and then try to run the marker mapper enrichment in Pupil Cloud. This will give you a better idea of the success of marker detection. I hope this helps!

user-225593 15 May, 2024, 10:31:59

Thanks, Neil, for checking the code and for your guidance. As our focus is real-time detection of gaze (detecting gaze on the screen and giving feedback continuously), may I confirm whether this would be possible using the Marker Mapper enrichment in Pupil Cloud? We found that the Marker Mapper enrichment works well for offline analysis of eye features after uploading the recordings. I wonder whether it also works for online analysis (during the recording itself). Kindly correct me if my understanding is wrong. Thank you.

nmt 15 May, 2024, 11:24:52

Marker Mapper is for post-hoc analysis only - not real-time. However, you can use it to check that the markers can be robustly detected in your testing environment. Once that's confirmed, go ahead and retry the real-time screen gaze package.

user-225593 15 May, 2024, 11:27:23

Thanks a lot, @nmt for your clarification. I will try it out and will see how this works.

user-0001be 15 May, 2024, 21:25:26

I have a question about obtaining horizontal gaze and then converting it to horizontal (x) velocity. It seems that the horizontal gaze velocity isn't very fast compared to the angular (x, y, z) velocity, even though I'm looking between two points horizontally. Is this expected?

Chat image

user-f43a29 16 May, 2024, 07:13:08

Hi @user-0001be , when you say "horizontal (x) velocity" are you referring to the speed of the red gaze circle in the scene camera video feed? So, in units of "px/s" or similar?

user-e08a8b 16 May, 2024, 07:01:24

Hi @nmt, it's one in the rec_export library. The path is "src/pupil_labs/rec_export/explib/fixation_detector". In the helpers.py script, I want to use get_gaze_from_recording function where I need to feed in rec_folder to run. I tried downloading a recording from pupil-labs cloud (including the scene video) and ran it, but it resulted in empty values for gaze_distorted, gaze_reflected, and gaze_normalized. I wondered whether the way I fed in the folder directory was incorrect.

nmt 16 May, 2024, 07:09:47

Hey @user-594678! Let's move the conversation to here. Just to clarify, 'pl-rec-export' is a command-line tool. In other words, you run it from the command line with pl-rec-export "/path/to/rec", and it should export gaze, IMU, etc. as CSV files. Therefore, using specific functions from the tool in a standalone manner may not always work as expected.

user-594678 16 May, 2024, 19:07:07

Hi @nmt thanks for your response! What I want is access to "corrected_velocity", which is the gaze velocity after correction for the optic flow. I'm mainly looking into three scripts: (1) [src/pupil_labs/rec_export/explib/fixation_detector.py]: the 'detect_fixation_neon' definition describes the process of getting the 'corrected_pixel_velocity'; (2) [src/pupil_labs/rec_export/explib/fixation_detector/optic_flow_correction.py]: which describes how to compute the optic_flow_vector and get the corrected pixel velocity; and (3) [src/pupil_labs/rec_export/explib/fixation_detector/helpers.py]: which detects gaze points and normalizes or resamples them.

Here are the two questions I currently have. (a) Are the gaze points in 'gaze.csv' downloaded from Pupil Cloud preprocessed with a savgol filter and corrected for optic flow? I am assuming not, because when I computed the gaze velocity directly from the gaze points, the velocity values seemed unreasonable, and they looked better when I applied a savgol filter. (b) I tried computing gaze_distorted, gaze_rectified, and gaze_normalized using your code (helpers.py lines 96-111). For the 'rec_folder' variable, I input the directory where the TimeSeries + Scene Data is located. However, when I ran the code, the three variables were all empty. When I ran line 96 ("rec = Recording(rec_folder)"), it seemed to work okay; when I printed out 'rec' it gave me <Recording id='****', name='2023-10-17_15:08:51', start_datetime_str='2023-10-17 22:08:51.818000+00:00', duration_s=218.954> (I hid the recording id for posting purposes).

I understand that 'pl-rec-export' is supposed to be run from the command line and would probably not behave as expected when only part of it is run. However, I think the organization of the process is pretty straightforward, and lines 96-111 of the helpers.py script in particular should run without a problem.

Do you have any idea how I can get the corrected velocity?

user-477991 16 May, 2024, 07:36:31

Hi everyone, can someone help me please? I want to display the 'Pupil Capture - World' window in my app's GUI window, can anyone tell me if this is possible? Thanks

user-f43a29 16 May, 2024, 08:41:33

Hi @user-477991 , I have removed your other post in πŸ‘ core to keep the conversation in one channel. If you want to display a stream of the world camera in your app, then you can make use of Pupil Core's Network API. It offers a "Frame Publisher" plugin that you will want to activate.

Check this tutorial for an example of how to use it to display the video feed in your own GUI windows.

Note that Pupil Capture will need to remain running in the background.
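For reference, a hedged sketch of that recipe (default IP/ports assumed, adapted from the pupil-helpers style, so double-check it against the linked tutorial):

import zmq
import msgpack
import numpy as np
import cv2

ctx = zmq.Context.instance()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Pupil Capture to start the Frame Publisher plugin in BGR format.
notification = {"subject": "start_plugin", "name": "Frame_Publisher", "args": {"format": "bgr"}}
pupil_remote.send_string("notify." + notification["subject"], flags=zmq.SNDMORE)
pupil_remote.send(msgpack.dumps(notification, use_bin_type=True))
pupil_remote.recv_string()

# Get the SUB port and subscribe to world frames.
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("frame.world")

while True:
    topic, payload, *frames = subscriber.recv_multipart()
    meta = msgpack.loads(payload)
    img = np.frombuffer(frames[0], dtype=np.uint8).reshape(meta["height"], meta["width"], 3)
    cv2.imshow("World", img)  # replace with a widget from your own GUI toolkit
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break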

user-0001be 16 May, 2024, 12:26:20

Rob, I'm just taking data from the eye cameras, not the scene camera. I am using gaze_sample.optical_axis_left_x (or right_x) for the horizontal component, while for the angular velocity I use all of (x, y, z).

For angular velocity (deg/s), it's the code below:

        current_optical_axis_left = [gaze_sample.optical_axis_left_x, gaze_sample.optical_axis_left_y, gaze_sample.optical_axis_left_z]
        current_optical_axis_right = [gaze_sample.optical_axis_right_x, gaze_sample.optical_axis_right_y, gaze_sample.optical_axis_right_z]
        current_time = time.time()
        elapsed_time = current_time - start_time
        time_interval = current_time - previous_time

        # Calculate eye velocities using helper functions
        left_eye_velocity = calculate_eye_velocity(previous_optical_axis_left, current_optical_axis_left, time_interval)
        right_eye_velocity = calculate_eye_velocity(previous_optical_axis_right, current_optical_axis_right, time_interval)

with the following functions:

from math import sqrt, acos, pi

def calculate_angular_difference(vector1, vector2):
    # Angle between two 3D vectors, in degrees
    dot_product = sum(a * b for a, b in zip(vector1, vector2))
    magnitude1 = sqrt(sum(a ** 2 for a in vector1))
    magnitude2 = sqrt(sum(b ** 2 for b in vector2))
    if magnitude1 == 0 or magnitude2 == 0:
        return 0
    cos_angle = dot_product / (magnitude1 * magnitude2)
    cos_angle = min(1, max(-1, cos_angle))  # clamp to avoid acos domain errors
    angle_radians = acos(cos_angle)
    return angle_radians * (180 / pi)

def calculate_eye_velocity(previous_vector, current_vector, time_interval):
    # Angular velocity in deg/s between two consecutive optical-axis samples
    if time_interval <= 0:
        return 0
    angular_diff = calculate_angular_difference(previous_vector, current_vector)
    return angular_diff / time_interval

and for horizontal velocity it's just along the x component.

user-0001be 16 May, 2024, 22:57:28

@user-f43a29 In addition to that, I also have another 2 questions:
1. When I move my head to the right, it gives a positive velocity, and I expect my eye gaze velocity to be negative, since the eyes move in the opposite direction to compensate when looking at a fixed point in front of me. But as you can see from the figure, they are both on the positive side. Why do you think that is?
2. The head velocity seems smooth compared to the eye velocities, which look very spiky. Is it because of the data collection sampling rate?

user-2cc535 17 May, 2024, 06:06:17

Hello everybody, I hope you're doing well. I'm a new user of Pupil Core and I'm a little unfamiliar with it. I'm working on film and images (especially paintings). I have a question: if I design a task in PsychoPy, how can I integrate it with Pupil Capture? Is there anyone here with experience of this? Any help would be appreciated.

user-07e923 17 May, 2024, 06:40:02

Hey @user-2cc535, thanks for reaching out. You can use the PsychoPy Integration for Pupil Core to do stimulus presentation and control your experiment directly within PsychoPy Builder.

We also recommend using Surface Tracking to define the screen area, so that you can map the screen onto the world video.

Please note that the PsychoPy team recommends using only version 2022.2.5 for eye tracking studies.

user-2cc535 17 May, 2024, 08:06:05

Dear @user-07e923 thanks a lot for your kind reply. In this case, can I analyse my outcomes with Pupil Player without any problem?

user-07e923 17 May, 2024, 08:11:42

I think this depends on your research goals and what you would like to get. Could you clarify what your end goal is for your experiment? Like, what do you want to understand at the end of the experiment? This helps me give you better feedback.

user-2cc535 17 May, 2024, 08:22:26

I need fixation counts and durations in AOIs, and the most important thing for me is heatmaps. I mean, in a painting (or eventually more than one), which zone receives the highest level of attention?

user-07e923 17 May, 2024, 09:02:04

In this case, you can analyze them directly on Pupil Player, using the Surface Tracker plugin.

user-2cc535 17 May, 2024, 09:02:31

Thank you so much

user-07e923 17 May, 2024, 09:03:44

Oh, this document also explains to you what the normalized coordinates mean on the surfaces (e.g., what is 0,0). It'll be helpful if you want to find out specific fixation locations.
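As a small illustration of using those coordinates (the 1920x1080 resolution is an assumption, use your own display size), converting surface-normalized values, which have their origin at the bottom-left of the surface, into screen pixels with a top-left origin looks roughly like this:

SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution

def surface_norm_to_screen_px(norm_x, norm_y):
    # Map surface-normalized coords (origin bottom-left) to pixel coords (origin top-left).
    px = norm_x * SCREEN_W
    py = (1.0 - norm_y) * SCREEN_H  # flip y
    return px, py

# e.g. a fixation reported at (0.25, 0.75) on the surface:
print(surface_norm_to_screen_px(0.25, 0.75))  # -> (480.0, 270.0)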

user-f43a29 17 May, 2024, 12:29:27

Hi @user-594678 , pl-rec-export is to be used with Native Recording Data, not the Timeseries + Scene Data files.

nmt 17 May, 2024, 14:25:07

Hi @user-594678! You're correct that the gaze.csv download is not optic flow corrected. But as @user-f43a29 points out, pl-rec-export is meant for the Native Recording Data, which you can obtain from Cloud or directly off the phone via local transfer. Give that a go and see if the functions work for you!

user-477991 17 May, 2024, 14:37:58

Thank you very much! It really succeeded in opening a window that displays the world camera. Is there an option to also display the gaze position (which I see in the application window) in the new window? πŸ™

user-f43a29 17 May, 2024, 15:56:32

That's great to hear. The rest of the files in that directory contain examples of how to use the Network API.

To get gaze over the Network API, check out this part of the documentation, as well as this description of the gaze message format.
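As a hedged sketch (defaults assumed; see the linked message-format docs for the full datum contents), subscribing to gaze is similar to the frame example above:

import zmq
import msgpack

ctx = zmq.Context.instance()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("gaze.")  # all gaze topics

while True:
    topic, payload = subscriber.recv_multipart()
    gaze = msgpack.loads(payload)
    # norm_pos is in normalized scene-camera coordinates (0-1, origin bottom-left)
    x, y = gaze["norm_pos"]
    print(f"{topic.decode()}: norm_pos=({x:.3f}, {y:.3f}), confidence={gaze['confidence']:.2f}")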

user-0001be 17 May, 2024, 14:39:42

@rob In addition to that I also have

user-594678 17 May, 2024, 18:53:34

Hey @nmt @user-f43a29 , Thanks for your response! I downloaded the Native Recording Data and now it's working πŸ™Œ

I have a follow-up question regarding how the code extracts the gaze data (the one saved in gaze.csv). I've studied the repository more, and it looks like the data saved in gaze.csv comes directly from the 'gaze ps*.raw' file, i.e. the 'distorted' data. Is that correct?

I looked into the gaze_distorted data and the gaze data pulled from the csv file. They have the same length, but they are not exactly the same. (See the attached pictures: 'exported' is from the csv file and 'distorted' is read directly from the recordings.)

Could you help me understand what gaze data is saved in the csv file? I'm trying to understand how the gaze data is processed and saved. Thanks for your help!

Chat image Chat image

user-f43a29 21 May, 2024, 11:55:30

Hi @user-594678 , I assume that you have plotted "gaze_distorted" from the detect_fixations_neon function ?

This is the gaze data, in scene camera coordinates, before it has been undistorted via application of the camera intrinsics. See the AlphaLabs Camera Distortion article for an explanation.

Here, your "gaze_exported" plot actually shows the same data, just "gaze_distorted" has been trimmed at the beginning. Notice that in the left image, your "gaze_distorted" values are starting from roughly frame 6000 of gaze_exported and in the right image, "gaze_distorted" is starting from roughly frame 2500 of gaze_exported.

I am not sure though why your plot of gaze_distorted is starting >12 seconds into the recording in both cases. Have you trimmed the beginning of the data before making that plot?

user-b62a38 21 May, 2024, 03:14:40

Hi Team, is there a way to access the camera intrinsics (https://docs.pupil-labs.com/core/terminology/#camera-intrinsics) through Unity?

user-07e923 21 May, 2024, 08:05:17

Hi @user-b62a38, the camera intrinsic estimation is stored onto the PC whenever you run a new estimation. You could technically read this file in Unity, since the file is just an msgpack-ed dictionary: see https://discord.com/channels/285728493612957698/285728493612957698/1160814272050384937

Oh, perhaps some of your fellow Core-XR users have better solutions: e.g., https://discord.com/channels/285728493612957698/285728635267186688/1083057787745075361
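For what it's worth, a quick way to inspect such a file from Python (the path here is hypothetical; the same idea applies to a msgpack reader in C#):

import msgpack

# Hypothetical path -- point this at the .intrinsics file in your
# pupil_capture_settings folder.
INTRINSICS_FILE = "/path/to/pupil_capture_settings/your_camera.intrinsics"

with open(INTRINSICS_FILE, "rb") as f:
    intrinsics = msgpack.unpack(f, raw=False)

# Inspect the structure to find the entry for your resolution; it should
# contain the camera matrix and distortion coefficients.
print(intrinsics.keys())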

user-b62a38 21 May, 2024, 08:14:47

Thanks! I'll do some looking into this, appreciate the quick reply

user-594678 21 May, 2024, 14:07:58

[email removed] thanks for your answer. Yes, it is "gaze_distorted" from detect_fixation (more exactly, I snipped part of a helper function in the detect_fixation script).

Could you please help me understand (a) whether the gaze.csv file then contains the gaze coordinates after "undistorting" the data, and (b) why the beginning of the gaze recording appears to be trimmed?

Also, I didn't trim the data when I plotted those two figures. Could this be the Neon device behaving in an unexpected way that we might want to investigate?

user-f43a29 21 May, 2024, 16:37:13

Hi @user-594678 , the x/y pixel coordinates in the exported gaze.csv file from pl-rec-export are the distorted data, while the gaze direction columns (azimuth/elevation) are computed from the undistorted data. Depending on what you need to do, it is fine to work with the "distorted" gaze data. For instance, the fixation detector in pl-rec-export uses the distorted points.
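As an illustrative sketch of that relationship (placeholder intrinsics, and the spherical-angle convention shown is a common one that may differ in detail from pl-rec-export's):

import cv2
import numpy as np

# Placeholder intrinsics -- use the camera matrix / distortion coefficients
# loaded from your own recording.
camera_matrix = np.array([[890.0, 0.0, 800.0],
                          [0.0, 890.0, 600.0],
                          [0.0, 0.0, 1.0]])
distortion_coefficients = np.zeros(8)

gaze_px = np.array([[[812.3, 655.1]]], dtype=np.float32)  # distorted pixel coords

# Without a projection matrix P, undistortPoints returns normalized image
# coordinates, i.e. the (x/z, y/z) components of the viewing ray.
xn, yn = cv2.undistortPoints(gaze_px, camera_matrix, distortion_coefficients)[0, 0]
ray = np.array([xn, yn, 1.0])

azimuth = np.degrees(np.arctan2(ray[0], ray[2]))
elevation = np.degrees(np.arctan2(-ray[1], np.linalg.norm(ray[[0, 2]])))
print(azimuth, elevation)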

pl-rec-export does not trim data at the beginning, nor does it do that in the detect_fixations_neon function. Since these functions were extracted and are embedded in custom code, I cannot be completely certain why you get that result.

Based on what you have sent, I see no indication that your Neon malfunctioned, so I do not see an immediate reason to investigate the device itself.

user-5f1abb 21 May, 2024, 20:27:44

someone should really integrate your glasses with GPT

user-5f1abb 21 May, 2024, 20:28:18

I've been thinking of doing such a thing for my thesis

nmt 21 May, 2024, 20:35:43

Hey @user-5f1abb πŸ‘‹. You might want to check out our Alpha Lab guide on this topic: https://docs.pupil-labs.com/alpha-lab/gpt4-eyes/

user-5f1abb 21 May, 2024, 20:39:17

Oh thats lovely

user-5f1abb 21 May, 2024, 20:40:05

Now I imagine using it for doing neuroscience research in our lab. Thanks for letting me know

user-d407c1 21 May, 2024, 20:40:22

I am planning on updating it to support GPT-4o next week, so API calls are cheaper and faster.

user-5f1abb 21 May, 2024, 20:40:39

It is open source right?

user-d407c1 21 May, 2024, 20:41:23

the gist? sure, openAI LLM? well, that's not πŸ˜‰

user-5f1abb 21 May, 2024, 20:41:53

Yeah _open_AI it is

user-5f1abb 21 May, 2024, 20:42:49

What I was asking is this: can I contribute to this glasses software of yours?

user-5f1abb 21 May, 2024, 20:43:59

It includes API calls etc. for using different models, maybe open-source models like LLaMA or Mistral

user-5f1abb 21 May, 2024, 20:44:11

Is it open to modification I mean

user-d407c1 21 May, 2024, 20:47:29

Sure, feel free to modify the script provided there as you wish.

That example is only meant to showcase its capabilities.

That said, I am not familiar with any open-source model with multimodal capabilities (images).

Regarding what is open source and what's not with πŸ‘“ neon , please see this message: https://discord.com/channels/285728493612957698/285728493612957698/1230425316980166707

user-5f1abb 21 May, 2024, 20:49:05

Thanks ❀️

user-594678 22 May, 2024, 06:48:32

Hey @user-f43a29 Just wanted to keep you updated on what I found. I think the 'gaze_200hz.raw' file somehow has the beginning of the gaze recording snipped, and that looks like what I got when I pulled 'gaze_distorted' using the helper.py script. In the figures I'm attaching, you can see that the 'gaze raw / low sampling rate' plot shows the same data as the 'gaze from csv' plot, but with fewer data points, while the 'gaze 500Hz raw' data is somewhat different from the other two plots: not only does it miss the beginning of the gaze points, but it also has some data points that look different from the other two (marked in the green square in the third figure). 'gaze raw / low sampling rate' is the data pulled from 'gaze ps1.raw', 'gaze from csv' is the gaze.csv file (x, y pixel coordinates), and 'gaze 500Hz raw' is from the 'gaze_200hz.raw' file.

Chat image Chat image Chat image

user-f43a29 22 May, 2024, 08:01:36

Hi @user-594678 , may I ask some clarifying questions?

  • When you say 'gaze from csv', are you referring to gaze.csv from Pupil Cloud or pl-rec-export?

  • What is gaze 500Hz? Did you resample Neon's 200Hz gaze data to 500Hz?

  • The 'gaze_200hz.raw' file contains raw data from Neon. It should not be snipped at the beginning relative to the gaze.csv exports from pl-rec-export or Pupil Cloud.

  • What is 'gaze low sampling rate'? Did you change the gaze sampling rate in the Neon Companion app when making this recording? If the gaze sampling rate was at the default value of 200Hz, then gaze_ps1.raw has data sampled at 200Hz. The gaze_200hz.raw file is a linear resampling to 200Hz, for the situations where either the gaze sampling rate was changed in app settings or the cases where maybe a data point or two is dropped during an intense recording session. If the sample rate setting in the app was at 200Hz, then gaze_ps1.raw and gaze_200hz.raw are effectively the same data, and they are also the distorted values, which should be what you find in the gaze exports.

user-e6afe3 22 May, 2024, 16:04:05

@user-f43a29 continuing our conversation from core-xr since it is no longer related- yes i am interested in finding what object the observer is looking at in the Unity scene. Do you know of a way to relate the objects presented in Unity in each frame (I'm not sure if Unity generates a data file for this) to the screen coordinates that would be provided by Pupil Capture, in the recorder feature or otherwise? Let me know if I can provide more details, or if there is an easier way to have this conversation. Thank you for your help.

user-f43a29 23 May, 2024, 17:03:39

Thanks @user-e6afe3 That is enough info. We’ve discussed it and should have a suggestion ready by end of day.

user-594678 23 May, 2024, 16:55:43

Hey @user-f43a29

  β€’ It refers to the gaze.csv from Pupil Cloud.

  β€’ Sorry, that's a typo. 200Hz is correct.

  β€’ That's what I expected too, but it doesn't seem to be the case. It would be helpful if you could double-check it on your end. I wonder whether it is possible that the resampling procedure distorts the data?

  β€’ I didn't change the sampling rate when making the recordings; I didn't know that was even possible. The data plotted as 'gaze low sampling rate' is correctly pulled from the 'gaze_ps1.raw' file. However, as you can see in the figure, the number of data points is about 1/2 to 1/3 of that in the 'gaze_200hz.raw' file. I remember that the Pupil Labs website previously said that gaze data is collected at a low sampling rate on the Companion device and then resampled at a higher sampling rate when the recording is uploaded to the Cloud (I cannot find the document, so I guess it's been updated?). I thought that upsampling is what happens between 'gaze_ps1.raw' and 'gaze_200hz.raw'. Otherwise, why would you have two files with the same data?

user-f43a29 23 May, 2024, 17:13:29

Hi @user-594678 which Companion phone do you have? Sure, if you upload the recording data in question to a file sharing service and send me the link [email removed] then I can try running pl-rec-export on it here.

user-594678 23 May, 2024, 17:16:28

I guess it's called OnePlus? Do you want me to send you both the Native data and the csv files?

user-f43a29 23 May, 2024, 17:28:23

You can send all of it.

user-594678 23 May, 2024, 22:44:52

Hey @user-f43a29, you should be able to see my email in your inbox.

What I want to achieve is (a) getting the optic flow vector, and (b) computing the velocity corrected for the optic flow, in angles.

I was going to use the 'get_corrected_angular_velocity' function in the 'optic_flow_correction.py' script, where I realized that the script passes 4 parameters to the '_get_estimated_next_points' function (lines 269-271), while the function actually takes two inputs (line 224). I assume that 'get_corrected_angular_velocity' is no longer used and that 'get_corrected_pixel_velocity' is used to calculate the corrected velocity instead. I wonder, then, what would be the best way to get the corrected angular velocity. Do you think directly converting the pixel velocity to angular velocity using the camera intrinsics would work?

user-594678 29 May, 2024, 00:46:56

Hey team and @user-f43a29 could you help me understand how '_get_estimated_next_points' is computed (lines 222-239 in optic_flow_correction.py)? I tried to use it, but the optic_flow_vector is computed from the video and hence has a smaller number of data points than the gaze points. How does it directly subtract the optic_flow_vector from the gaze?

user-f43a29 27 May, 2024, 07:58:47

Hi @user-e6afe3 ! @user-d407c1 and I have discussed an idea, described below, but please note that this is a somewhat tricky problem:

The approach would use the ray casting functionality of Unity.

Within Unity, you can suspend four unique AprilTags (as textures on rectangles, for example) at a fixed distance in front of the camera. Then, the Surface Tracker plugin would give you normalized values in Unity's coordinate system.

If you seat the user at a fixed distance from the monitor and they will mostly sit still, then you can determine their corresponding virtual position in Unity.

The direction of the casted ray would be the vector connecting the observer's virtual position with the point reported by the Surface Tracker plugin.

You can try using the virtual position of the seated observer or the point reported by the Surface Tracker as the origin of the casted ray and see which gives best results.

Essentially, this will map the gaze ray into the 3D scene of Unity, casting it onto the closest object in that direction.

This is not something we have tested, so you will need to do a validation to be sure all is working as expected, but we feel like this might be the easiest in terms of implementation. Let us know if anything is unclear.

user-e6afe3 06 June, 2024, 18:40:26

Hi Rob and Miguel, I really appreciate you helping me out! I'm a little unclear on how to determine the user's virtual position based on the user's position relative to the monitor. Within Unity, wouldn't that just be equivalent to the Unity camera object position? Also, how do I provide Unity with gaze data from Pupil in a way that will allow objects to be "hit" by the user's gaze vector?

user-083b4f 28 May, 2024, 16:37:10

Hi team, I see the Marker Mapper and Reference Image Mapper features in the enrichment section, but both of them seem to be for offline use only. Is there any way that we can use these for real-time processing? Also one more question regarding using the real-time APIs. I know we can use them through Python but is it possible to also use them through C++?

user-d407c1 28 May, 2024, 17:18:35

Hi @user-083b4f πŸ‘‹ ! You can use AprilTags to transform the gaze position to a monitor online; check out this tutorial. Regarding C++, we provide a Python wrapper due to its widespread use and practicality, but the protocols used under the hood are a REST HTTP API, WebSockets, and the RTSP protocol (RFC 2326), which you could probably use with the live555 library.

user-083b4f 28 May, 2024, 18:03:51

@user-d407c1 Thanks for the info! That helps!

user-f43a29 29 May, 2024, 08:25:03

Hi @user-594678 , I was testing your data with pl-rec-export yesterday. My plan is to have a more detailed answer with respect to your data later today, but I can answer your other questions in the meantime.

First, if it does not work out in your code, then I can see about adding an optic flow corrected (angular) gaze output to pl-rec-export.

With reference first to your other message (https://discord.com/channels/285728493612957698/446977689690177536/1243333859504820255), you can still use the get_corrected_angular_velocity function (see here). You just need to update the call to _get_estimated_next_points here to use the newer function signature (i.e., just remove gaze_time_axis, fs_gaze).

How that function works is explained here. Although the optic flow vectors are computed from the scene video, they are interpolated to have the same sampling rate as gaze.

The correction (i.e., subtraction from gaze) happens here, and you can try replacing that with the get_corrected_angular_velocity routine.
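Conceptually, that interpolation-plus-subtraction step looks like the following sketch (random placeholder arrays, not the library's exact code):

import numpy as np

# Optic flow at the scene-video frame rate (~30 Hz): timestamps and (dx, dy) in px/s
t_flow = np.linspace(0.0, 1.0, 31)
flow_xy = np.random.randn(31, 2) * 5.0           # placeholder values

# Gaze pixel velocity at 200 Hz
t_gaze = np.linspace(0.0, 1.0, 201)
gaze_vel_xy = np.random.randn(201, 2) * 50.0     # placeholder values

# Interpolate each optic-flow component onto the gaze time axis
flow_on_gaze = np.column_stack([
    np.interp(t_gaze, t_flow, flow_xy[:, 0]),
    np.interp(t_gaze, t_flow, flow_xy[:, 1]),
])

# Optic-flow-corrected gaze velocity: a simple vector subtraction, which only
# works once both arrays share the same (gaze) time axis.
corrected_vel_xy = gaze_vel_xy - flow_on_gaze
print(corrected_vel_xy.shape)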

user-594678 30 May, 2024, 01:13:37

Hey @user-f43a29, now the data looks good! Thank you so much for working on fixing the problem! I hate to, but I need to raise another issue related to using 'get_corrected_angular_velocity', unfortunately. I wasn't able to get the velocity using the function, because '_get_estimated_next_points' only takes two inputs while four inputs are passed in at lines 269-271 of optic_flow_correction.py. I'm attaching my testing version of the code where you can find the error message. Please take a look and let me know how I can bypass this!

test.ipynb

user-f99fdc 30 May, 2024, 06:31:43

And another thing I don't understand: scan_id_recording in main is set to true or false, but afterwards it is searched for in the list of IDs, which I don't understand (it can't be found there, since it's not an ID). I'm a little bit lost; I'm not very familiar with this kind of dataset/manipulation.

user-07e923 30 May, 2024, 06:32:23

Hi @user-bdc05d, i've moved your message here since they seem to be related to Alpha lab scripts, and not directly to Neon.

Could you perhaps start by describing which scripts you're using, what you're trying to do with them, and what you'd like to achieve? This helps us give you better feedback.

user-ae2674 30 May, 2024, 06:33:11

Also, what is the difference between pressing yes or no for the scanning video presence? (Both display the same for me.)

user-bdc05d 30 May, 2024, 07:20:42

I'm using the script and the command line provided by the Alpha Lab scanpath page (scanpath visualisation) and I found two malfunctions. The first is that the ID of the scanning video was not removed, regardless of whether I clicked yes or no when the information window popped up (I figured out that the ID was not actually being stored, and I managed to store it by double-clicking on the scanning video when I want it removed; otherwise I just press no if there is no scanning video or I don't want to remove it). Also, there was a list 'names' written as a comprehension that wasn't right: it was meant to be a list, I think, but it was written as a comprehension over a numpy array, so it was not returning a list as expected.

It would be great if you could double-check this to see whether my modifications are correct, because I'm new to the script and spent a lot of time on it, but I'm not 100% sure of what it does.

Tell me if it's not clear, and sorry for having used the wrong channel πŸ™‚

user-f43a29 30 May, 2024, 06:41:42

Hi @user-594678 , please see my message here (https://discord.com/channels/285728493612957698/446977689690177536/1245291806636114003) for instructions on that.

user-594678 30 May, 2024, 17:22:33

Hi @user-f43a29, thanks for your answer, and sorry that I misread your previous message. I still cannot get the velocity correctly, because of the mismatched lengths of the optic_flow_vector and the gaze πŸ₯² Below I'm attaching the error message I got. Because the gaze's length (32863) is much larger than the optic flow's (3013), the code cannot subtract the two directly. You can check "_get_estimated_next_points" (lines 232-237 in "optic_flow_correction.py"). Please let me know how I can resolve the issue. Hopefully this is the last bit of troubleshooting! Thank you in advance for your efforts to help me!


ValueError                                Traceback (most recent call last)
Cell In[5], line 8
      5 distortion_coefficients = rec.scene_camera.distortion_coefficients[0]
      7 # get corrected velocity
----> 8 velocity = optic_flow_correction.get_corrected_angular_velocity(
      9     optic_flow_vectors,
     10     resampled_time_axis,
     11     gaze_distorted,
     12     gaze_normalized,
     13     camera_matrix,
     14     distortion_coefficients,
     15 )

File ~/Documents/GitHub/pupillabs_util/pl_rec_export/src/pupil_labs/rec_export/explib/fixation_detector/optic_flow_correction.py:269, in get_corrected_angular_velocity(optic_flow_vectors, gaze_time_axis, smoothed_gaze_distorted, smoothed_gaze_normalized, camera_matrix, distortion_coefficients, fs_gaze)
    244 def get_corrected_angular_velocity(
    245     optic_flow_vectors: OpticFlow,
    246     gaze_time_axis,
        (...)
    251     fs_gaze=200,
    252 ):
    253     """
    254     Get angular velocity of the gaze point, corrected for optic flow by
    255     vector subtratction.
        (...)
    ...
    238 estimated_next_points = np.hstack(
    239     (estimated_next_points_x[:, np.newaxis], estimated_next_points_y[:, np.newaxis])
    240 )

ValueError: operands could not be broadcast together with shapes (32836,) (3013,)

user-07e923 30 May, 2024, 07:55:46

Scanpath visualization script

user-f43a29 30 May, 2024, 17:35:03

Hi @user-594678 , did you make sure to resample the optic flow vectors to be on the same time axis as the gaze data?

Essentially, you just need to run the first half of the detect_neon_fixations function, replacing get_corrected_pixel_velocity with get_corrected_angular_velocity and skip all the remaining fixation detection steps.

user-594678 31 May, 2024, 18:02:09

Hey @user-f43a29, thanks! I was looking into the wrong fixation detection script. It works now! πŸ™‚ I sincerely appreciate your help!

End of May archive