Hi guys, how can I get the camera intrinsics stored in the recordings? e.g. world.intrinsics - I can't open it properly
Greetings,
Korbi
cool thanks
I have another question: I want to track drivers' gaze behavior in real-world driving. Currently I am thinking of a single marker calibration under real conditions. I want to place the marker about ~85 m away and carry out the calibration. Now I want to calculate the necessary marker size the software needs to detect the marker. That's why I wanted the focal length of the world intrinsics. What are your suggestions about this approach? Is this even worth a try?
When using the 3d pipeline, the calibration does extrapolate outside of the calibration area. Attempting to calibrate at a distance of 85 metres isn't recommended, nor is it needed. My advice would be to calibrate at the standard distance, e.g. 1-2.5 metres, and then instruct the wearer to fixate on objects in the distance such that you can validate that gaze is located on the objects. You could also use the natural features calibration, but this is more dependent on good communication between the wearer and operator. Read more about that here: https://docs.pupil-labs.com/core/software/pupil-capture/#natural-features-calibration-choreography
thanks for your answer !
any known reason why the eye video won't show but it's still inputting data?
Do I see it correctly that the gaze estimation is still working correctly?
yes lol
then this is likely an openGL error
I think it found my last recording data, which is why it's working, but I'm about to have a different wearer
I suggest rebooting the machine.
alright, should I uninstall the device first?
No (if you are referring to the device driver)
rebooted, same issue; they're showing as hidden in Device Manager though
Do you see any other entries with the same name in the other categories, e.g. cameras or imaging devices?
Yeah I saw them in a USB drop down so I uninstalled them and am restarting again
I do not think this was a driver issue but a display issue, to be honest.
still no cam view
But only the eyes?
correct
and it is still tracking correctly (mostly)
The terminal is not showing anything unexpected? No error messages that could be related?
I can shoot you my log file real fast
ok, nothing unusual on the first look. Could you please try restarting with default settings from the general settings menu?
I hate that this worked
yeah, I feel you. Would be better to know what caused the issue in the first place.
thanks as always though
@papr got a quick question here, is there any way to adjust the deviated fixation position (x-axis and y-axis) afterwards (maybe using the residual method)?
You need to adjust the gaze using a manual offset (either via post-hoc calibration or custom plugin [1]) and then recalculate fixations.
[1] https://gist.github.com/papr/d3ec18dd40899353bb52b506e3cfb433
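If you only need to shift already-exported gaze data for your own analysis (rather than inside Capture/Player via the linked plugin), a minimal pandas sketch could look like this; the offset values and file path are placeholders:
```python
# Hypothetical sketch: shift exported gaze by a constant offset and save a
# corrected copy. Assumes a standard gaze_positions.csv export with
# norm_pos_x / norm_pos_y columns; fixations would still need to be
# recalculated (e.g. with your own detector) on the shifted data.
import pandas as pd

OFFSET_X, OFFSET_Y = 0.02, -0.01  # placeholder offsets in normalized coordinates

gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze["norm_pos_x"] = (gaze["norm_pos_x"] + OFFSET_X).clip(0.0, 1.0)
gaze["norm_pos_y"] = (gaze["norm_pos_y"] + OFFSET_Y).clip(0.0, 1.0)
gaze.to_csv("exports/000/gaze_positions_offset.csv", index=False)
```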
Many Thanks~
Hi, can someone help me with the formula to convert accuracy (degrees) to cm when all measured points are at the same distance? So having an accuracy of 1.1° on a plane at a distance of 1.40 m, I want to calculate how many cm that is
Hello, I am getting the following error message:
world - [WARNING] video_capture.uvc_backend: Updating drivers, please wait...
powershell.exe -version 5 -Command "Start-Process 'C:\Program Files (x86)\Pupil-Labs\Pupil v3.5.1\Pupil Capture v3.5.1\PupilDrvInst.exe' -Wait -Verb runas -WorkingDirectory 'C:\Users\VISION~1\AppData\Local\Temp\tmpexxwja1m' -ArgumentList '--vid 1443 --pid 37426 --desc \"Pupil Cam1 ID2\" --vendor \"Pupil Labs\" --inst'"
world - [ERROR] video_capture.uvc_backend: An error was encountered during the automatic driver installation. Please consider installing them manually.
powershell.exe -version 5 -Command "Start-Process 'C:\Program Files (x86)\Pupil-Labs\Pupil v3.5.1\Pupil Capture v3.5.1\PupilDrvInst.exe' -Wait -Verb runas -WorkingDirectory 'C:\Users\VISION~1\AppData\Local\Temp\tmpfnw5lsyk' -ArgumentList '--vid 3141 --pid 25771 --desc \"Pupil Cam2 ID0\" --vendor \"Pupil Labs\" --inst'"
world - [ERROR] video_capture.uvc_backend: An error was encountered during the automatic driver installation. Please consider installing them manually.
world - [INFO] video_capture.uvc_backend: Done updating drivers!
world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied.
world - [WARNING] camera_models: No camera intrinsics available for camera (disconnected) at resolution [1280, 720]!
world - [WARNING] camera_models: Loading dummy intrinsics, which might decrease accuracy!
world - [WARNING] camera_models: Consider selecting a different resolution, or running the Camera Instrinsics Estimation!
I have followed the instructions here (https://docs.pupil-labs.com/core/software/pupil-capture/#video-source-selection), on troubleshooting drivers, with no resolution to the issue. Of note, however, I do not see the libUSBK Usb Devices or Imaging Devices categories in my Device Manager, even after showing hidden devices.
Is there a process for manual installation that is documented somewhere?
Please follow steps 1-7 from these instructions https://github.com/pupil-labs/pyuvc/blob/master/WINDOWS_USER.md
My system is detecting the cameras, as they do show up in Device Manager.
Thank you, papr. I will see if this can resolve the issue.
hello. In my fixations.csv file I am getting multiple fixations for the overlapping frame intervals. For example:
id: 492
start_timestamp: 20676.97
start/end_frame_index: 7634-7639
3dx/y: 27.31673 / -22.378
id: 585
start_timestamp: 9382.697471
start/end_frame_index: 7627-7642
3dx/y: -1.753754619 / -4.730020008
I am not sure how to interpret this
The timestamps do not fit the frame indices. Could you please share this recording with data@pupil-labs.com such that we can inspect the cause?
@papr just sent an email with a dropbox download link
We will come back to you via email early next week
thank you @papr
On a side note, does anyone have any opinions on the following parameters for the fixation detection algorithm: 50ms / 1500ms / 2.7deg? These values were used in Andersson et al.'s 2017 paper "One algorithm to rule them all? An evaluation and discussion of ten eye movement event-detection algorithms". Their methodology was to have 2 human coders manually classify fixations and then determine the algorithm values behind their classification. Here are a few excerpts:
For fixation durations, we visually identified a small subset of fixations with durations between 8 and 54 ms, and the nearest following fixation duration was 82 ms. We therefore selected a minimum fixation duration of 55 ms to exclude this subset of very short fixations with durations outside the frequently
defined ranges (Manor & Gordon, 2003). This value is also supported by previous studies (Inhoff & Radach, 1998; see also Fig. 5.6 in Holmqvist et al., 2011).
The maximum fixation dispersion for the human coders was, after removing one extreme outlier at 27.9°, found to be 5.57°. This was clearly more than expected, especially for data of this level of accuracy and precision, and the distribution had no obvious point that represented a qualitative shift in
the type of fixations identified. However, after 2.7°, 93.95 % of the fixation values have been covered and the tail of the distribution is visually parallel to the x-axis. Thus, any selected
value between 2.7 and 5.57 would be equally arbitrary. So, as even a dispersion threshold of 2.7 would be considered a generous threshold, we decided to not go beyond this and simply set the maximum dispersion at 2.7°. Do note that we used the original Salvucci and Goldberg (2000) definition of
dispersion ((ymax − ymin) + (xmax − xmin)) which is around twice as large as most other dispersion calculations (see p. 886, Blignaut, 2009). The dispersion calculation for the humans
was identical to the one implemented in the evaluated IDT algorithm.
Have a look at this message: https://discord.com/channels/285728493612957698/285728493612957698/836509594780041246
Very interesting! I always used: deviation = distance * tan(accuracy) and I'm super super bad with physics :D. Just need to learn the difference but I'll look into it
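For anyone who wants to plug in numbers, a minimal sketch of that conversion using the 1.1° / 1.40 m values from the question above:
```python
import math

accuracy_deg = 1.1   # angular accuracy in degrees
distance_m = 1.40    # distance to the plane in metres

# deviation on the plane: distance * tan(accuracy)
deviation_m = distance_m * math.tan(math.radians(accuracy_deg))
print(f"{deviation_m * 100:.1f} cm")  # ~2.7 cm
```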
Hi. I am using pupil core and one of the cameras on one of my eye-trackers appears to no longer be detected by pupil capture. I am attaching images of the issue. Wondering if there are any suggestions for how to resolve the issue?
Please contact info@pupil-labs.com in this regard
Hi! How do I determine the gaze coordinates in the AOI region?
Hi @user-345d29! Use the Surface Tracker plugin to define AOIs and map gaze positions. Documentation here: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker
thank you @user-9429ba
and how do I display all of my vis polyline and fixation points on one map?
this image is not what I want
@user-345d29 Hit the export button and check out our tutorial here: https://github.com/pupil-labs/pupil-tutorials/blob/master/03_visualize_scan_path_on_surface.ipynb
Hi all, I have a question regarding the position of the nose in relation to the screen. Is the exact middle of the x-axis of the video screen located where the nose of the person is? I'm finding a slight trend to "the left" and I'm not sure if it is an actual result or caused by the shift in the camera
@papr , can you please help me?
Hello! I am looking to temporally synchronize EEG (recording using EGI NetAmps 300) and eye tracking (pupil core). What is the best method for this? I am currently looking at LSL - can you use LSL to sync EEG data that is recorded using EGI with the eye tracker or is the LSL used to actually record EEG data? Thank you!
Hi, lsl is a framework to send and receive time synchronized data. Most manufacturers provide client software that publishes data from their devices in real-time. Using the official recorder app, the data can be stored to disk.
For Pupil Core, there are two options: a relay plugin that publishes the Capture data and can be recorded with the official recorder, or a recorder plugin that receives LSL data from other sensors within the Pupil Labs recording.
https://github.com/labstreaminglayer/App-PupilLabs/tree/lsl_recorder/pupil_capture
Thank you, so once capture records your EEG data - the eeg time series will be synchronized with the pupil capture time?
Hi All, I'm new to using the Core eye tracker and have been playing around with the Surface Tracker plugin. I see that it's important to specify the "real-world" width and height of surfaces, but I don't see a place to define units for these dimensions. Are there default units for surface height/width? Or is it more important that the relative dimensions are accurate (i.e., agnostic to units)? TIA for help on this matter!
It is actually not that important. It only has an influence on the heatmap resolution and the scaling of the mapped data.
The mapping happens into normalized coordinates (https://docs.pupil-labs.com/core/terminology/#surface-aoi-coordinate-system). From there it can be scaled with any units you like.
So if I export gaze/fixation data for those surfaces, changing the surface dimensions won't influence those coordinates? just the heatmap?
Correct. The gaze on surface data comes in two types of coordinates. normalized and scaled. The latter is just normalized * dimensions
Understood. Thanks!
Yes, Capture stores the incoming data in Pupil time.
It looks like our EGI netstation is too outdated to use the LSL. We are using NetAmps 300. We are considering different ways of synchronizing our system. Do you know if Pupil Core can receive digital markers (basic electrical signal) from an older system like our EGI NetAmps 300? Thank you again!
I have a question regarding surface tracking with Pupil Core. I used markers so I managed to set surface for desired area, but the surface then doesn't move together with this area during video. Does anyone have any ideas what could be the problem?
In your shared recording, the issue is that the surface is defined in relation to the markers, not in relation to the screen content. The markers do not move, therefore the surface does not move either. If you wanted to track someone's face with that method, they would need to attach an apriltag marker to their head. I think the post-hoc face detection and gaze mapping approach is more suitable for your use case.
Could you make a recording with Capture and share it with [email removed]? We can have a look
Sure, thank you so much! It should be in your mail. This is just a test recording, but we are trying to measure eye gazes toward the face as a whole and toward the eyes. So my other question would be if we can also make surfaces more round? Or what would be the best way to track faces? I found the Face Mapper enrichment but I think it is only for Pupil Invisible? Is anything similar possible for Pupil Core?
I think the easiest way would be to run a face detector on the recorded video and use the gaze_positions.csv export data to check if gaze falls onto any of the detected faces. See this tutorial on how to extract frames from an exported video https://github.com/pupil-labs/pupil-tutorials/blob/master/09_frame_identification.ipynb
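Not an official Pupil Labs tool, but as a rough illustration of that approach: run a standard face detector (here OpenCV's Haar cascade, purely as an example) on an exported scene frame and test whether a gaze point from gaze_positions.csv falls inside any detected face box. Paths and the example gaze values are placeholders:
```python
import cv2

# placeholder path to a frame extracted as in the linked tutorial
frame = cv2.imread("exports/000/frames/frame_01234.png")
h, w = frame.shape[:2]

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.1, 5)

# example gaze sample (norm_pos_x / norm_pos_y from gaze_positions.csv)
norm_x, norm_y = 0.52, 0.61
gx, gy = norm_x * w, (1.0 - norm_y) * h  # flip y: Pupil's origin is bottom-left

on_face = any(x <= gx <= x + fw and y <= gy <= y + fh for (x, y, fw, fh) in faces)
print("gaze on a face:", on_face)
```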
Hello, I would like to ask what interface your eye camera module uses, thanks
Hi, we use the UVC protocol to access and control the cameras.
Hi! I'm currently calculating amplitudes and velocities for my gaze data. For this, I need the coordinate's size in deg. visual angle. We recorded with the wide angle lens and a resolution of 1280x720p. On the HP, the FOV is given as 99°x53°. This now leads to non-square pixel sizes in deg. visual angle (99/1280=0.077, while 53/720=0.0736). The same holds for the other resolutions on the HP. I assume that the sensor actually does have square pixels and that the FOV might be rounded or the like. Can somebody help me out here?
Hey Anna, calculating velocities is not related to camera angles since an eye model is already calculated by Pupil Capture. In your case - I guess you are looking at VOR - you may only be interested in coordinates on a surface. By using the 3D gaze estimation feature, I think you can calculate polar coordinates and then velocities.
Thank you for your answer. We are using the 2D model and I want to calculate the angular velocity.
@user-a594fc in this case, you can use the scene camera intrinsics to unproject 2d gaze into cyclopean 3d gaze directions (cyclopean, since the origin of the direction is the camera origin) and calculate angular differences between these 3d vectors
We use this method in our fixation detector https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/fixation_detector.py#L142-L150
This method is independent of where the stimulus is shown, i.e. you do not need to worry about your display's FOV
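A minimal sketch of that idea using plain OpenCV (not the Pupil camera model class itself); the camera matrix and distortion coefficients below are placeholders - take the real ones from your recording's world.intrinsics:
```python
import cv2
import numpy as np

# placeholder intrinsics for a 1280x720 scene camera; replace with the values
# loaded from world.intrinsics
K = np.array([[794.0, 0.0, 640.0],
              [0.0, 794.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# two example gaze points in scene-image pixel coordinates
pts_px = np.array([[640.0, 360.0], [700.0, 360.0]], dtype=np.float64)

# undistort + unproject to normalized image coordinates, then append z = 1
undist = cv2.undistortPoints(pts_px.reshape(-1, 1, 2), K, dist).reshape(-1, 2)
dirs = np.hstack([undist, np.ones((len(undist), 1))])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit 3d direction vectors

angle_deg = np.degrees(np.arccos(np.clip(dirs[0] @ dirs[1], -1.0, 1.0)))
print(f"angular difference: {angle_deg:.2f} deg")
```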
Hello, we would like to study the eye camera transmission method of this product. Could you please provide us with the product models of the eye camera and IR LED? If possible, can you briefly describe the transmission method of the eye camera? Thanks a lotttttttttttttt!!!
Could you please elaborate on what you mean by transmission method?
Pupil Capture does not have built-in support for that. But you can build plugins to extend the software. https://docs.pupil-labs.com/developer/core/plugin-api/#adding-a-plugin Also, it is likely that such a plugin has been developed before. Unfortunately, I am not aware of any concrete implementations right now.
Thank you!
Gaze is estimated in the scene cameras coordinate system. We do not shift it virtually to align it with the nose.
Thank you!
Hi folks,
As I said in a previous post: I want to track gaze behavior while driving a car. I thought of a single marker calibration at 1-2.5 m distance and a natural feature validation at high distances, as suggested by Neil.
Unfortunately that results in a bad accuracy during validation (> ~3°) even though the calibration shows a high accuracy of ~1.5°. (I'm using the 3D pipeline.)
In "Best practices" it says:
"If your stimuli are at a fixed distance to the participant, make sure to place calibration points only at this distance. If the distance of your stimuli varies, place calibration points at varying distances."
Can someone explain to me (mathematically) or provide links on what errors can occur when calibration distance ≠ validation distance? Also, I noticed that the accuracy is often gaze-angle dependent.
I think I will do everything with the Natural Feature Choreography
Greetings, Cobe
Hi @user-b91cdf. Firstly, did you cover all of the gaze angles of interest during the calibration? For example, one can expect validation accuracy to be gaze angle dependent if you only calibrated in the centre of the visual field. Secondly, what happens if you use the natural features for calibration and validation?
Thank you for your response. I am now doing as you said. Looking at https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py I only find the 1280x720 intrinsics for the radial lens. We are using the fisheye lens in the experiment, but with a reduced resolution of 1280x720 - but there are only fisheye intrinsics for the full 1920x1080. Should I use the values from the radial lens instead?
Are you using a Pupil Core camera or a custom camera?
@papr We know that your product uses the camera module of OV6211. But I am trying to understand: 1. How to combine IR LED and eye camera? Since they seem to be independent. 2. ov6211 seems like using mipi protocol, how to convert it to type C interface? 3. Could you please provide us with all the product models used from the eye camera to the USB?
I do not know how much of that information we disclose publicly. Please reach out to info@pupil-labs.com in this regard
Pupil Core. Original model from ~2017/8 - might be even older.
In this case, use the radial model intrinsics for the 720p resolution
Thanks! That worked. Another question on the output of undistortPoint - these are the angles of the spherical coordinate system right? The values are (in my case) mostly between -0.5 and 0.5, not what I expected, I expected something in the range of 99°x53°. Ultimately I need the angular velocities in °, thus if I put subsequent samples in cosine-distance and multiply by the sample time difference (approx 240Hz), I expected to get typical velocities.
no, the output is in Euclidean coordinates. You can use the cosine distance to calculate angular differences
unfortunately, I don't understand. In what unit is the output? If I want angular differences/velocities, I need to have them in visual angle, not in px or camera range. If it is in Euclidean coordinates, I could just use the Euclidean difference, but the values would be much too small (max difference is 1 on the diagonal of the -0.5/0.5 box).
the resulting output is a 3d direction in mm. (0, 0, 1) is the direction straight forward out of the scene camera. See the definition of https://en.wikipedia.org/wiki/Cosine_similarity A and B should be two gaze 3d vectors. The result is the angular difference between them. If you divide by the duration between the samples you get the velocity in angles/s
ah I see, I only get a 2D vector back - I might use an old pupil lab version, will investigate
ah, then you are using the wrong function. use unprojectPoints instead
it's the right function, but a relatively old version (2 years ago or so), there is no cv2.convertPointsToHomogeneous yet, will try this
Then you just need to update your OpenCV version
But this can easily be reimplemented
sorry, misunderstanding - the line cv2.convertPointsToHomogeneous is not yet in the Pupil Labs function
but it just adds the third dimension with 1's, the ranges of the resulting data points are still between -0.5 and 0.5
ah I missed your comment about mm, this is exactly what I was looking for. Thus you implicitly assume a sphere with radius 1mm - gotcha
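To close the loop on the velocity part, a tiny sketch: once two successive unit gaze direction vectors and their timestamps are available, the angular velocity follows directly (the values below are made up for illustration):
```python
import numpy as np

# two example unit gaze direction vectors and their timestamps in seconds
v0 = np.array([0.0, 0.0, 1.0])
v1 = np.array([0.05, 0.0, 1.0]) / np.linalg.norm([0.05, 0.0, 1.0])
t0, t1 = 100.000, 100.004  # roughly 240 Hz sampling

angle_deg = np.degrees(np.arccos(np.clip(np.dot(v0, v1), -1.0, 1.0)))
velocity_deg_s = angle_deg / (t1 - t0)  # divide (not multiply) by the time difference
print(f"{velocity_deg_s:.1f} deg/s")
```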
Hey all, I keep having one of my cameras disconnect, which I think is a loose wire but not sure. Is there a typical place that it would be loose or breaking from that I can try to repair?
Please contact info@pupil-labs.com in this regard.
Sorry for the late reply. Yes, I think with the single marker calibration and the spiral pattern I covered most gaze angles. The natural feature calibration works quite well. During calibration I didn't allow head movement to cover most gaze angles (used 9 points) and during validation I allowed head movement as the driver would normally look. Reported median angular accuracy ~1.2°
I'm glad to hear that the natural features calibration is working well for you. This approach does require good communication between the operator and wearer, such that the wearer knows exactly which features to look at.
With post-hoc pupil detection and manual calibration, it was mentioned previously that you can edit the 3D model to exclude eyelashes, etc. How is this possible in Pupil Player? I only see (under the 'post-hoc pupil detection' tab) "detect pupil positions from eye videos" and then details on eye0 and eye1, along with detection progress. Many thanks!
Hi, the eye lashes can be excluded from the 2d pupil detection by setting a region of interest for the detector. For that, start the post-hoc detection (the eye windows should open), pause the detection via the menu in the main Player window. Then, in each eye window, change from camera to ROI mode (general settings menu) and adjust the ROI accordingly. Afterward, go back to the main Player menu and restart the pupil detection using the corresponding button. This should apply the new ROI from the beginning of the recording.
@user-aa7c07 for reference, this is what setting the ROI looks like: https://drive.google.com/file/d/1NRozA9i0SDMe_uQdjC2jIr000iPjqqVH/view?usp=sharing Be sure not to set the ROI so small that the pupil is not detected. This is easier to check in a real-time context as you can ask the wearer to look around whilst you adjust the ROI
Thanks Neil! Excellent! Will give that a go. Matt
Hi all, I am using the offline surface exporter on Player v3.5.1 to export surface data from a ~15 minute recording made with Capture v3.4.0, and my surface_events.csv file does not make sense to me - while all of the other surface related data files have amounts of data and entries that make sense for our recording, the surface events file has only ~100 entries and is not consistent with the "on_surf" entries in the gaze position files even though the surfaces were visible for 41153/41277 frames. Is there a step I am forgetting in the export that could be causing this inconsistency? I have attached the events file. Thank you!
The file indicates when a surface becomes visible for the first time (enter) and when it is no longer visible (exit). It makes sense that you only have few events if the surface was visible for the majority of the frames. The file does not refer as to when gaze enters or exits the surface.
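If it helps, the enter/exit pairs can be turned into visibility durations with a few lines of pandas; the column names below are assumptions (check the header of your surface_events.csv), the pairing logic is the point:
```python
import pandas as pd

events = pd.read_csv("exports/000/surfaces/surface_events.csv")

durations = []
enter_ts = None
for _, row in events.iterrows():
    if row["event_type"] == "enter":
        enter_ts = row["world_timestamp"]
    elif row["event_type"] == "exit" and enter_ts is not None:
        durations.append(row["world_timestamp"] - enter_ts)
        enter_ts = None

print(f"surface visible for {sum(durations):.1f} s across {len(durations)} intervals")
```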
Thank you! I was not understanding the terminology used in the docs "image enter/image exit."
Do core glasses have TTL input and output?
No. You would need to write a custom plugin to add that functionality
I'm wondering what's going on in the pupil_capture_lsl_relay.py. It seems lsl timestamps are overwritten with the capture clock. But that would mean you would lose the ability to synchronize pupil labs with other LSL streams? Presently I'm recording data that requires synchronization with different streams; and when I look at the lsl timestamps all streams start in the same time range roughly except for the pupil labs timestamps. They suggest that the eye data occurs millions of seconds before the other streams.
The plugin adjusts the Capture clock to the local LSL clock. This way we can use native Pupil Capture timestamps instead of generating new timestamps on sending the data out https://github.com/labstreaminglayer/App-PupilLabs/blob/master/pupil_capture/pupil_capture_lsl_relay.py#L34-L36
I think the overwriting of these timestamps should simply be left out, right? The whole point of the LSL plug-in is to have data in LSL time rather than local Capture time.
If we did the latter, we would disregard the processing delay of the data and actively desync the data.
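The gist of the approach, as a simplified sketch of the idea only (not the plugin's actual code): measure the offset between the Capture clock and the LSL clock once, then express native Capture timestamps in LSL time, which preserves the processing delay:
```python
import time

import pylsl

# stand-in for Pupil Capture's own clock; in the real plugin this would be the
# Capture time function, a monotonic clock is used here purely for illustration
capture_clock = time.monotonic

offset = pylsl.local_clock() - capture_clock()

def capture_ts_to_lsl(capture_ts):
    """Express a native Capture timestamp in LSL time."""
    return capture_ts + offset

print(capture_ts_to_lsl(capture_clock()))
```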
Hi, I have been working on some custom code to fit the pye3d model using this code provided by @papr last spring https://nbviewer.jupyter.org/gist/papr/48e98f9b46f6a0794345bd4322bbb607
For this could you please tell me how to find the focal_length and resolution that was used for the recording? At the moment my code uses the default values but I'm not sure if these are correct for my recording. Many thanks.
The recording should include eye*.intrinsics files. The focal_length can be inferred from the camera matrix included in the files: https://github.com/pupil-labs/pupil/blob/master/pupil_src/shared_modules/camera_models.py#L252-L255
I figured it would have something to do with the eye*.intrinsics files. So to get the focal length I would need to use
@property
def focal_length(self):
    fx = self.K[0, 0]
    fy = self.K[1, 1]
    return (fx + fy) / 2
to obtain it from the eye*.intrinsics file? Do you have any example code of how to do this (e.g. how to open the eye.intrinsics file)?
See this message https://discord.com/channels/285728493612957698/446977689690177536/913512054667747389
Great! Thanks very much. I'll try that now.
Hi @papr, I have been able to open the eye intrinsics files (thanks for that) and extract the resolution, however I am not sure how to get the focal length using the function you provided. When using def focal_length(self) to get the value of the focal length, what value from the intrinsics file do I use for the input (i.e. where does 'self' come from)?
The linked function is a method of the CameraModel class. The self refers to the class instance to which it is bound. You do not need it. You just need the core logic: extracting two numbers from the camera matrix K and averaging them
Humm... okay! So to get the focal length for eye 1, what should I use for XXX? Would it be 'Eye1_intrinsics.K' (which is my file for eye 1)?
fx = XXX.K[0, 0]
fy = XXX.K[1, 1]
focal_length = (fx + fy) / 2
The loading code should return a Python dictionary. I suggest printing it to get an overview. Use this code to get K:
import numpy
loaded_dict = ...
resolution = str((192, 192)) # adjust accordingly
K = loaded_dict[resolution]["camera_matrix"]
fx = K[0][0]
fy = K[1][1]
focal_length = (fx + fy) / 2
@user-a09f5d see above as well
Thanks @papr . So what I now have is:
with open(intrinsics_dir + "eye0.intrinsics", "rb") as fp:
Eye0_intrinsics = msgpack.unpack(fp)
#Get resolution
Eye0_resolution = Eye0_intrinsics["resolution"]
#Get focal length
K = Eye0_intrinsics["resolution"]["camera_matrix"]
fx = K[0, 0]
fy = K[1, 1]
Eye0_FocalLength = (fx + fy) / 2
I guess K = Eye0_intrinsics["resolution"]["camera_matrix"] could also be K = Eye0_intrinsics[Eye0_resolution]["camera_matrix"].
I do not think that would work. Please print Eye0_intrinsics and share it here. I might be wrong regarding the file format that I have in mind
Yeah, I tried it and it indeed didn't work.
{'version': 1, '(192, 192)': {'camera_matrix': [[282.976877, 0.0, 96.0], [0.0, 283.561467, 96.0], [0.0, 0.0, 1.0]], 'dist_coefs': [[0.0, 0.0, 0.0, 0.0, 0.0]], 'resolution': [192, 192], 'cam_type': 'radial'}}
That is what I get when i print Eye0_intrinsics
K = Eye0_intrinsics["(192, 192)"]["camera_matrix"]
Would '["(192, 192)"]' change if the resolution changed?
That works for
#Get resolution
Eye0_resolution = Eye0_intrinsics["(192, 192)"]["resolution"]
But threw up an error for K = Eye0_intrinsics["(192, 192)"]["camera_matrix"]
fx = K[0, 0]
TypeError: list indices must be integers or slices, not tuple
Please use K[0][0] instead of K[0, 0]. The latter assumes K to be a numpy array (special type) instead of a primitive Python list of lists.
Ah yes sorry! I missed that bit. My bad.
That seems to be working now. Wonderful! Thank you very much
Regarding my other question, would '["(192, 192)"]' change if the resolution changed?
Yes
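Putting this thread together, a small consolidated sketch that avoids hard-coding the resolution key (the file path is a placeholder):
```python
import msgpack

with open("eye0.intrinsics", "rb") as fp:
    intrinsics = msgpack.unpack(fp)

# pick the (only) resolution entry, skipping the 'version' field
res_key = next(k for k in intrinsics if k != "version")
K = intrinsics[res_key]["camera_matrix"]
resolution = intrinsics[res_key]["resolution"]

focal_length = (K[0][0] + K[1][1]) / 2
print(resolution, focal_length)
```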
Hi,
How does pupil player get the gaze position for a given frame from all the available positions? Is it a confidence weighted average?
Gaze data is assigned to scene video frames based on their timestamps, i.e. multiple gaze positions are assigned to a single frame. If you have to choose a single gaze position per frame, I recommend choosing the datum that is temporally closest to the frame.
Is this how pupil player determines the gaze position for the display?
And yes.
Pupil Player displays all gaze points for the current frame (minus those that have insufficient confidence)
Thank you. Will do this then. Also, what if the closest data point is low confidence? Do we drop that point or pick the next best one?
Pupil Player simply tries to render all points associated with the current frame. It will not render those with low confidence. If there is no gaze with sufficient confidence, no gaze will be rendered. There is no active selection process.
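Following the recommendation above (pick the gaze datum that is temporally closest to each frame), a minimal sketch; the column names reflect a typical Player export (world_index / gaze_timestamp plus a world_timestamps.csv file) - adjust if your export differs:
```python
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
frames = pd.read_csv("exports/000/world_timestamps.csv")
frame_ts = frames.iloc[:, -1].to_numpy()  # assumes last column holds the frame timestamps

def closest_gaze(frame_index):
    """Return the gaze row closest in time to the given world frame, or None."""
    candidates = gaze[gaze["world_index"] == frame_index]
    if candidates.empty:
        return None
    dt = (candidates["gaze_timestamp"] - frame_ts[frame_index]).abs()
    return candidates.loc[dt.idxmin()]

print(closest_gaze(100))
```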
@papr Hi all, I have the gaze points in "gaze_positions_on_surface_Surface 2.csv" and fixations in "fixations_on_surface_Surface 2.csv", but I want to know which gaze points in "gaze_positions_on_surface_Surface 2.csv" are fixations. How can I do this? Or can I match the fixations to the gaze points in "gaze_positions_on_surface_Surface 2.csv"?
You can do that by looking up the fixations by their ids in the fixations.csv file. It contains the start time and the duration in ms. With the latter you can calculate the end timestamp. Then you can go through the gaze_positions_on_surface_Surface.csv and use the gaze_timestamp value to check if it falls into any of the fixation ranges.
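A short sketch of that matching with pandas (file paths and the surface name are placeholders; duration in fixations.csv is in ms):
```python
import pandas as pd

fixations = pd.read_csv("exports/000/fixations.csv")
gaze_surf = pd.read_csv("exports/000/surfaces/gaze_positions_on_surface_Surface 2.csv")

# end of each fixation window = start + duration (ms -> s)
fixations["end_timestamp"] = fixations["start_timestamp"] + fixations["duration"] / 1000.0

def fixation_id_for(ts):
    """Return the id of the fixation whose time window contains ts, else None."""
    hit = fixations[(fixations["start_timestamp"] <= ts) & (ts <= fixations["end_timestamp"])]
    return hit["id"].iloc[0] if not hit.empty else None

gaze_surf["fixation_id"] = gaze_surf["gaze_timestamp"].map(fixation_id_for)
print(gaze_surf[["gaze_timestamp", "fixation_id"]].head())
```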
I see. Thank you very much
Hi there, I am looking to clean up my gaze mapping and see that one eye pupil tracked well while the other did not. Is it possible to only use one eye for the calibration and gaze mapping? if so, how?
Have you checked the confidence of the data? A binocular gaze datum will only be based on both eyes if they provide a pupil detection with a confidence of 0.6 or higher. Otherwise, the data will be mapped monocularly in which case you can easily filter out the low confidence data.
Recommendation: The first step to cleaning up gaze data should be removing the low confidence data.
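As a starting point, a quick sketch for filtering low-confidence samples out of an exported gaze file (0.6 mirrors the binocular matching threshold mentioned above; pick whatever threshold suits your data):
```python
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")
gaze_clean = gaze[gaze["confidence"] >= 0.6]  # drop low-confidence samples
print(f"kept {len(gaze_clean)}/{len(gaze)} samples")
```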
Ok, I can adjust the confidence, some of them are really low (<0.5) but some are higher, I'll try calibrating with a higher confidence threshold, thanks
Are you performing post-hoc calibrations?
yes
In this case there is a second approach: close Player, rename the eye video files with the bad detection (everything that starts with eye0 or eye1), restart Player, run the post-hoc pupil detection (it will only run for the untouched eye video file), and run the calibration on the monocular pupil data.
oh good to know. I'll try that if the higher calibration threshold does not work
hey, why is the price for the "HTC Vive Binocular Add-on" so high?
Hi @user-96f342! VR/AR products are priced relative to the academic discount model we have for Pupil Core headsets (similar HW and components). Additionally, VR/AR add-ons are custom-made research tools made in lower quantities, where margins are already slim. I hope I have been able to answer your question.
Hi, this is Chaipat. My research team and I have bought your Pupil Core and enjoyed using it. But today, as my team were changing the world camera, the world camera stopped working. We inspected the camera and found that the image sensor mirror has a crack. Would it be possible to replace only this part? Could you please guide us to where we can buy one? Thanks!
Please contact info@pupil-labs.com in this regard
Thanks @papr, I emailed yesterday but haven't heard back from them. Thanks for the info though
@papr Would like to know if there is any algorithm that could analyse the data and obtain the saccade amplitude and saccade velocity?
Hi @papr I've seen the feature "Reference Image Mapper (beta feature)". Do you think that in the future it will be possible to draw regions of interest on the reference image and export data? Thanks
Hi @user-beb12b! This is a feature we have on the roadmap, but it will still take several months for it to become available and I can not yet tell you a more precise release date.
If you are comfortable with Python, you can however consider this notebook that implements such a feature: https://github.com/pupil-labs/analysis_drafts/blob/main/01%20-%20Products%20in%20a%20Shelf.ipynb
That's amazing. Personally I believe that this feature will really make the difference. That's why people usually prefer Tobii, but if you are able to bring this feature and increase the accuracy of the automatic mapping (because I understand that at this stage the accuracy is not really fine), you will get a lot of new clients, and I will be one of those clients as well! A suggestion that I can give you at this stage is to give the researcher the ability to manually correct the automatic mapping in case it is not correct. In addition, increasing the accuracy of the automatic mapping so that it can also be used on dynamic and small stimuli (e.g. mobile devices while browsing a web site/app) would make the difference. What do you think about it? Thanks
Thanks a lot for your feedback @user-beb12b! I'd like to give you a thorough answer on this. Let's move this to the invisible channel, as discussing Pupil Cloud features will be relevant to more people there!
@papr (Pupil Labs) Would like to know if the Pupil Core could measure the distance between the eye and the fixation (fixating object) in the export data?
Many Thanks
gaze_point_3d describes the point that the subject looks at. You can technically measure the distance between the eye and the fixation point using this and the eyeball centre (eye_center0/1_3d). Important note: gaze_point_3d is defined as the intersection of the visual axes of the left and right eyes, and the depth component might not be accurate in all viewing conditions as measuring depth via ocular vergence is inherently difficult. Another option would be to use the head pose tracker to determine the distance between the wearer and a given stimulus https://docs.pupil-labs.com/core/software/pupil-player/#head-pose-tracking
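A small sketch of the first option using the standard gaze_positions.csv export columns (units are mm in scene camera coordinates; keep the vergence caveat above in mind when interpreting the depth):
```python
import numpy as np
import pandas as pd

gaze = pd.read_csv("exports/000/gaze_positions.csv")  # placeholder path

gaze_pt = gaze[["gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].to_numpy()
eye0 = gaze[["eye_center0_3d_x", "eye_center0_3d_y", "eye_center0_3d_z"]].to_numpy()

# per-sample distance between the right eyeball centre and the gaze point (mm)
gaze["distance_mm"] = np.linalg.norm(gaze_pt - eye0, axis=1)
print(gaze["distance_mm"].describe())
```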
Hi, can I get some clarification on the data output (using Pupil Player/Capture v3.4.0)? I have a scene set-up with 4 surfaces. The surface data does not correspond to what occurs in the world video, i.e. a given fixation which is clearly within a detected surface (as seen on the video) either does not show as true in the corresponding fixations_on_surface_X file or shows as True on an incorrect surface. Also, the "(Re)calculate gaze distributions" button is not displayed in the surface tracker within Pupil Player (offline surface tracker).
Hi @user-fcc3ea. How are you identifying fixations in the export that correspond to the fixations shown on a given surface in the world video? Re. the "(Re)calculate gaze distributions" button: it was removed when we updated the user interface - we need to also remove it from the docs
Hi All, we were using our Core headset with a Mac laptop (pupil capture v. 3.5.7, booted from the terminal on OS Monterrey) and getting images from the world view camera and eye cameras just fine, but now are unable to get an image from the world view camera at all. I've tried rebooting the computer and still no image. I've also tried moving the headset to another computer with pupil capture v 3.4 and OS big Sur, but still no world view camera image. Is the camera bad? Please advise. Thanks!
Edit to add - based on the error message, it seems that the world camera is not being detected "world - [ERROR] video_capture.uvc_backend: Could not connect to device! No images will be supplied."
Hi, please contact info@pupil-labs.com in this regard.
hello. When pupil player crashes, the new annotations are not saved, right?
Unfortunately not. The crash is due to insufficient RAM on your computer. Even if this error had been caught, writing data to disk also requires memory, i.e. saving the annotations might have also failed. I have not found a good strategy to avoid this issue yet.
player - [ERROR] launchables.player: Process Player crashed with trace:
Traceback (most recent call last):
File "launchables\player.py", line 745, in player
File "src\\pyglui\\custom.pxi", line 213, in pyglui.ui.Seek_Bar.handle_input
File "src\\pyglui\\ui.pyx", line 156, in pyglui.ui.UI.handle_input
File "src\\pyglui\\menus.pxi", line 821, in pyglui.ui.Container.handle_input
File "src\\pyglui\\menus.pxi", line 825, in pyglui.ui.Container.handle_input
File "src\\pyglui\\custom.pxi", line 213, in pyglui.ui.Seek_Bar.handle_input
File "src\\pyglui\\custom.pxi", line 222, in pyglui.ui.Seek_Bar.handle_input
File "seek_control.py", line 132, in set_playback_time
File "seek_control.py", line 138, in set_playback_time_idx
File "plugin.py", line 224, in notify_all
File "zmq_tools.py", line 206, in notify
File "zmq_tools.py", line 171, in send
File "msgpack\__init__.py", line 35, in packb
File "msgpack\_packer.pyx", line 120, in msgpack._cmsgpack.Packer.__cinit__
MemoryError: Unable to allocate internal buffer.
player - [INFO] launchables.player: Process shutting down.
Fixations are defined at export and appear to follow sensible operations. The recordings were taken of aircrew during a simulation experiment and their identified fixations follow button presses and system interaction as expected. The surfaces were defined prior but turned off during the actual recording as it dropped the frame counter down to 4fps. In post analysis, when re-enabled, the surfaces appear to track fine, however the fixations do not correspond the correct surfaces. Is there an extra step I need to complete because the surface tracker was not running during the recording?
Would you be able to share a recording with [email removed]
Looking for help.
The eye tracking camera on the left will fail after a few minutes of use, and the red light doesn't turn on. The CPU usage of that camera also drops rapidly
I'm not sure I can give you any more information. Is the hardware broken?
Oh, I found the answer! Thanks!
Hi, so in my research, I've run into an issue where for those with smaller nose bridges, I cannot maintain a consistent position where the eye cameras can track the pupil, no matter how I adjust the cameras (sliding them across the bar, etc.) What method(s), if any, would you recommend to combat this issue?
In such cases, affixing some self-adhesive foam to the frame bridge can help to raise the headset on the nose. I would also recommend using a head strap to further secure the headset on the wearer.
Hello, I cannot find the Vibasept wipes in the UK, nor any product with the exact composition (22g ethanol, 21g etc...). Here I only seem to find 70% or 80% alcohol wipes or alcohol-free wipes. Do you have any suggestions? What product can I use for cleaning that is not going to damage the Pupil Core hardware?
Hi, I am using MacOS with M1 and I want to run pupil labs following https://github.com/pupil-labs/pupil/blob/master/docs/dependencies-macos.md. However, when installing dependencies, it could not build wheels for pupil-detectors, av, uvc. I also upgraded pip wheel. You can find the error code in the txt file. Can you help me with that?
Hi @user-6df3b1. Was there a specific reason that you are running from source? Nearly everything can be done using the plugin or Network API of the bundled application: https://github.com/pupil-labs/pupil/releases/download/v3.5/pupil_v3.5-7-g651931ba_macos_x64.dmg
Hi @nmt Thanks! That would work. But I want to code a pipeline that uses photos taken with the glasses (extract them), then import them into the software and mark the pupil. That is not possible with the application, so I would need to modify the source code, right?
I'm not 100% sure that I understand you aims. Can you elaborate a bit more on your intended use case?
Hi Neil, thanks again for your help! We will try it with the bundled application for now
Happy New Year to everyone. I have a few questions about how to use Pupil Labs in VR, but first of all, can anyone tell me whether the Pupil Labs results are time-resolved, or can it only collectively tell you which points have caught more attention?
Yes, our software provides timestamped gaze estimates in scene camera coordinates, i.e. the subject's field of view. In Unity/hmd eyes, this would correspond to Unity's main camera. Gaze can then be transformed into Unity world coordinates and e.g. be aggregated on objects.