Hi there, we've started encountering a problem: despite the pupil being very well contrasted with the iris, the detector refuses to look where the pupil is because of the positioning of the boxes you can see in this image. They keep moving on and off of the pupil. Do you know what might be happening and how we might correct for it?
You need to reset the ROI (region of interest). It is set to the bottom half (at least it looks like that)
It's not the ROI; the ROI is set to the full image. Those smaller boxes that keep shifting position appear to be what's preventing the pupil from being detected, since they keep moving away from where the pupil is
You can sort of see the thin white box that is the ROI on the edges of that eye image
isn't the grey dotted line the roi?
These two boxes?
Partial-image ROI
Full-image ROI
Please try restarting with default settings and ensure there are no custom pupil plugins or network API scripts running
They're not the ROI, no
Those boxes seem to constantly be changing position on their own
Unless it's a separate dynamically-updating ROI that moves based on where the darkest portions of the image are or something like that
I checked the ROI you can adjust via the gear at the top right of the eye window, and the ROI boundary points are set to the four corners of the image.
@user-3cff0d there are dynamic rois, yes, but they correspond to the light blue rectangles
@user-3cff0d I have no clue though why the ROI handles would not correspond to the ROI area displayed in your screenshot.
I wish I'd recorded a video of it, but yeah those dotted line boxes were automatically moving and updating themselves
Sounds like they were being set via the network API then (or a plugin)
This was with the posthoc-vr-gazer branch of PLC, running from source, and it was running alongside an HMD-Eyes Unity project
Would HMD-Eyes do something like that?
Not that I know of. This is the notification topic with which you can set the roi: notify.pupil_detector.set_roi
Should that happen again, I suggest subscribing to notify.pupil_detector and checking whether these notifications are being emitted.
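If it helps, here is a minimal sketch of such a subscriber. It assumes pyzmq and msgpack-python are installed, Pupil Capture is running, and Pupil Remote is listening on its default port 50020 on localhost:

```python
# Sketch: watch for pupil_detector notifications (e.g. ...set_roi)
# arriving over the Pupil Capture network API.
import zmq
import msgpack


def notification_topic(subject):
    # Notifications are published under "notify." + subject.
    return "notify." + subject


def watch_detector_notifications(address="tcp://127.0.0.1:50020"):
    ctx = zmq.Context()

    # Ask Pupil Remote for the port of the SUB socket.
    req = ctx.socket(zmq.REQ)
    req.connect(address)
    req.send_string("SUB_PORT")
    sub_port = req.recv_string()

    # Subscribe to everything under notify.pupil_detector,
    # which includes notify.pupil_detector.set_roi.
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:" + sub_port)
    sub.subscribe(notification_topic("pupil_detector"))

    while True:
        topic, payload = sub.recv_multipart()
        print(topic.decode(), msgpack.unpackb(payload))
```

If anything (a plugin, a script, or HMD-Eyes) emits set_roi, it should show up in this printout.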
Gotcha
Would that also update the big ROI circles that appear when you go to move the ROI box manually in the eye window?
It should, but I am not 100% sure.
Because those were still anchored in the corners
(By ROI circles I mean the points that you click-and-drag to adjust the ROI)
Hello! I have a question: how are the marks synchronized across devices? Is there a synchronization block on the Invisible/Core?
Is it possible to get a comparison table?
Hi, could you elaborate on what you mean by "mark"?
Analog Timestamps
There are no analog timestamps. All timestamps are recorded digitally. The cameras run independently. Each recorded video frame is timestamped independently. How this timestamp is generated depends on the product and OS it is running on.
Hi, I was able to reproduce the issue but unfortunately I was not able to find a stable solution, yet.
Okay, good that you were able to reproduce the issue
Okay. And how is synchronization of the streams from the three cameras (world, eye 1/2) guaranteed? And the synchronization between a stimulus on the screen and the eye cameras?
Our products are head-mounted. As a result, stimuli and gaze are always recorded independently. The three video streams are timestamped from the same clock, i.e. the streams are temporally in sync, but samples are not guaranteed to be taken at the same time. We employ matching algorithms that match samples from different streams to create temporally close pairs.
To synchronize with external signals, e.g. an on-screen stimulus, we provide several options. Which one we recommend depends on the product used and the use case requirements.
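To illustrate the matching idea in a simplified form (an illustrative sketch, not Pupil Labs' actual implementation): for each sample in one stream, pick the temporally closest sample in the other stream.

```python
import bisect


def match_closest(world_ts, eye_ts):
    """For each eye timestamp, return the index of the temporally
    closest world timestamp. Both lists must be sorted ascending."""
    matches = []
    for t in eye_ts:
        i = bisect.bisect_left(world_ts, t)
        # The closest neighbor is either just before or just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(world_ts)]
        matches.append(min(candidates, key=lambda j: abs(world_ts[j] - t)))
    return matches


# Example: three world frames, two eye samples.
print(match_closest([0.0, 0.033, 0.066], [0.010, 0.050]))  # → [0, 2]
```

Each eye sample is paired with the world frame whose timestamp is nearest, even though the cameras were never triggered simultaneously.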
Hello, I have been using your Pupil Cloud software to analyze some of my Pupil Invisible recordings and have run into some issues with the Marker Mapper enrichment feature. When I use project events that I created in the video instead of recording.begin and recording.end, the downloaded results appear to just pick a random time to start analyzing the data, as opposed to the project event that I listed as the from-event start. This problem goes away as long as I use recording.begin and recording.end as the from event and to event. Is this a known issue, and can I be fairly confident in the results as long as I use recording.begin and recording.end as my settings?
This issue is not known. Check out the sections.csv file. It should contain the time ranges for each exported section. Do you get data outside of these time ranges?
The sections.csv file has the correct start time for the chosen project event, but data does not show up in gaze.csv or surface_positions.csv until 46 minutes after the start time. I know that there should be data there, because when I used recording.begin/end there was data throughout the recording. My project event only cut off 3 minutes from the start and 2 minutes from the end.
Ok, thanks for checking! Could you share the enrichment id with us? With your permission, we would have a look at it to investigate the issue.
the data does end exactly at the end event and has no extra data
Will that provide access to the video? It is participant data and the video might have identifying content that I cannot share
See our Privacy Policy https://pupil-labs.com/legal/privacy/#when-and-by-whom-is-your-data-accessed%3F
When and by whom is your data accessed? - [...] - In case our systems detect errors, we will seek to fix them. If this requires accessing your recording data, we will seek your permission to do so. However, as a general rule we will not access any actual video data in this process, unless the nature of the error requires this and you have given your consent to it. In other words, we will try to investigate the issue without looking at the video data. Should that become necessary, we will come back to you to seek further permission.
Hi, I was wondering whether you could add new functionality to the calibration mode so that calibration can be done on multiple screens at the same time (for example, we have three screens and we want to display 5 dots one by one on each screen, or calibrate the three screens as a whole)? Or do you have any suggestions on how I could integrate such a feature via the API more easily? I'm confused as to where to start with this problem; we only want to use screen markers right now because they are easier and more convenient to control. Thank you for your help!
Hi! You can use the single marker calib. choreography in physical mode and display the markers yourself. So instead of printing the marker and having someone move the marker, you can write a program that displays the marker across the three screens.
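As a rough starting point, here is a sketch of rendering a concentric-ring marker image that such a program could display fullscreen on each of the three screens in turn. The ring radii and colors here are assumptions; compare against the marker Pupil Capture itself displays before relying on it:

```python
import numpy as np


def concentric_marker(size=400):
    """Return a square grayscale image (uint8) with alternating
    dark/light rings around the center, white outside."""
    img = np.full((size, size), 255, dtype=np.uint8)
    center = size // 2
    yy, xx = np.ogrid[:size, :size]
    r = np.sqrt((yy - center) ** 2 + (xx - center) ** 2)
    # Paint rings from the outside in; radii/values are illustrative.
    for radius_frac, value in [(0.45, 0), (0.30, 255), (0.15, 0), (0.05, 255)]:
        img[r <= radius_frac * size] = value
    return img
```

You could then show this image fullscreen with any GUI toolkit (e.g. an OpenCV window moved to each screen in turn) while the single-marker choreography runs.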
Thank you! I will try this!
Hi, I am running into issues trying to install from source. I'm working on a uni project and want to build a plugin. On a Windows 10 install I keep getting the following error and I am not sure how to solve it. I worked through the Windows dependencies page word-by-word and still get this when trying to start. Any help is appreciated!
player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
File "C:\Users\wittw\Desktop\pupil\pupil_jku\pupil_src\launchables\player.py", line 858, in player_drop
from pupil_recording.update import update_recording
File "C:\Users\wittw\Desktop\pupil\pupil_jku\pupil_src\shared_modules\pupil_recording\update\__init__.py", line 15, in <module>
from video_capture.file_backend import File_Source
File "C:\Users\wittw\Desktop\pupil\pupil_jku\pupil_src\shared_modules\video_capture\__init__.py", line 40, in <module>
from .uvc_backend import UVC_Manager, UVC_Source
File "C:\Users\wittw\Desktop\pupil\pupil_jku\pupil_src\shared_modules\video_capture\uvc_backend.py", line 24, in <module>
import uvc
ImportError: DLL load failed: The specified module could not be found.
Hey, did you know that you don't need to run from source to develop a plugin? That said, you are likely missing the turbojpeg lib, or don't have it in your PATH.
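A quick way to check is to scan the directories on PATH for the DLL. This is an illustrative helper; the exact DLL filename (e.g. turbojpeg.dll) depends on how the Windows dependencies were installed:

```python
import os


def find_dll_on_path(dll_name, path=None):
    """Return the first PATH directory containing dll_name, or None."""
    if path is None:
        path = os.environ.get("PATH", "")
    for directory in path.split(os.pathsep):
        if directory and os.path.isfile(os.path.join(directory, dll_name)):
            return directory
    return None


# Example: print(find_dll_on_path("turbojpeg.dll"))
```

If this returns None, `import uvc` will fail with the "DLL load failed" error above; add the directory containing the DLL to PATH and retry.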
I see stuff about DIY eye tracking. Are you able to hook arbitrary cameras up to the Pupil Capture software or do you need to have the Pupil Labs hardware in order to make use of this tracking?
Yes, you can use third-party cameras but they need to fulfil specific requirements to work out of the box. See this message for details https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498
Ok awesome, thanks! I spent some time trying to see if I could make something to connect my cameras through NDSI and I almost have it, but something isn't quite right. My videos have green/red blocks and stripes flickering over everything when they come through, and it intermittently crashes the program. I figure it might be easier to just get it building from source so I can connect it straight to the back end
You might also want to have a look at this third-party project https://github.com/Lifestohack/pupil-video-backend