software-dev


user-3cff0d 11 April, 2022, 17:00:07

Hi there, we've started encountering this problem where, despite the pupil being very well contrasted with the iris, the detector refuses to look where the pupil is due to the positioning of these boxes you can see in this image. They keep moving on and off of the pupil. Do you know what might be happening and how we might be able to correct for this?

Chat image

papr 11 April, 2022, 17:34:19

You need to reset the ROI (region of interest). It is set to the bottom half (at least it looks like that)

user-3cff0d 11 April, 2022, 17:35:14

It's not the ROI, the ROI is set to the full image. Those smaller boxes that keep shifting position appear to be what's preventing the pupil from being detected, since they keep moving away from where the pupil is

user-3cff0d 11 April, 2022, 17:35:37

You can sort of see the thin white box that is the ROI on the edges of that eye image

papr 11 April, 2022, 17:36:10

Isn't the grey dotted line the ROI?

user-3cff0d 11 April, 2022, 17:38:30

These two boxes?

Chat image

papr 11 April, 2022, 17:46:28

Partial-image ROI

Chat image

papr 11 April, 2022, 17:46:05

Full-image ROI

Chat image

papr 11 April, 2022, 17:41:58

Please try restarting with default settings and ensure there are no custom pupil plugins or network API scripts running

user-3cff0d 11 April, 2022, 17:38:44

They're not the ROI, no

user-3cff0d 11 April, 2022, 17:38:53

Those boxes seem to constantly be changing position on their own

user-3cff0d 11 April, 2022, 17:39:21

Unless it's a separate dynamically-updating ROI that moves based on where the darkest portions of the image are or something like that

user-3cff0d 11 April, 2022, 17:40:13

I checked the ROI you can adjust via the gear at the top right of the eye window, and the ROI boundary points are set to the four corners of the image.

papr 11 April, 2022, 17:41:05

@user-3cff0d there are dynamic ROIs, yes, but they correspond to the light blue rectangles

papr 11 April, 2022, 17:47:41

@user-3cff0d I have no clue though why the ROI handles would not correspond to the ROI area displayed in your screenshot.

user-3cff0d 11 April, 2022, 17:48:53

I wish I'd recorded a video of it, but yeah those dotted line boxes were automatically moving and updating themselves

papr 11 April, 2022, 17:49:13

Sounds like they were being set via the network API then (or a plugin)

user-3cff0d 11 April, 2022, 17:50:00

This was with the posthoc-vr-gazer branch of PLC, running from source, and it was running alongside an HMD-Eyes Unity project

user-3cff0d 11 April, 2022, 17:50:12

Would HMD-Eyes do something like that?

papr 11 April, 2022, 17:53:26

Not that I know of. This is the notification topic with which you can set the ROI: notify.pupil_detector.set_roi. Should that happen again, I suggest subscribing to notify.pupil_detector and checking whether these notifications are being emitted.
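
A minimal sketch of such a subscriber, assuming Pupil Remote is on its default port 50020:

import zmq
import msgpack

ctx = zmq.Context()

# Ask Pupil Remote for the SUB port of the IPC backbone.
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# Listen for all pupil_detector notifications, incl. set_roi.
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://127.0.0.1:{sub_port}")
subscriber.subscribe("notify.pupil_detector")

while True:
    topic, payload = subscriber.recv_multipart()
    # Notifications are msgpack-encoded dicts.
    print(topic.decode(), msgpack.loads(payload, raw=False))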

user-3cff0d 11 April, 2022, 17:53:44

Gotcha

user-3cff0d 11 April, 2022, 17:54:14

Would that also update the big ROI circles that appear when you go to move the ROI box manually in the eye window?

papr 11 April, 2022, 17:55:23

It should, but I am not 100% sure.

user-3cff0d 11 April, 2022, 17:54:27

Because those were still anchored in the corners

user-3cff0d 11 April, 2022, 17:54:46

(By ROI circles I mean the points that you click-and-drag to adjust the ROI)

user-f93379 12 April, 2022, 11:39:40

Hello! I have a question. How are the marks synchronized across devices? Is there a synchronization block on the "Invisible/Core"?

Is it possible to get a comparison table?

papr 12 April, 2022, 11:40:31

Hi, could you elaborate on what you mean by "mark"?

user-f93379 12 April, 2022, 11:51:46

Analog Timestamps

papr 12 April, 2022, 11:53:17

There are no analog timestamps. All timestamps are recorded digitally. The cameras run independently. Each recorded video frame is timestamped independently. How this timestamp is generated depends on the product and OS it is running on.

papr 12 April, 2022, 13:35:37

Hi, I was able to reproduce the issue, but unfortunately I have not been able to find a stable solution yet.

user-74ef51 12 April, 2022, 13:36:32

Okay, good that you were able to reproduce the issue

user-f93379 12 April, 2022, 13:56:14

Okay. And how is the synchronization of the streams from the three cameras (world, eye 1/2) guaranteed? And the synchronization between the stimulus on the screen and the eye camera?

papr 12 April, 2022, 14:06:23

Our products are head-mounted. As a result, stimuli and gaze are always recorded independently. The three video streams are timestamped from the same clock, i.e. the streams are temporally in sync, but samples are not guaranteed to be taken at the same time. We employ matching algorithms that pair samples from different streams to create temporally close pairs.

To synchronize with external signals, e.g. an on-screen stimulus, we provide several options. Which one we recommend depends on the product used and the requirements of the use case.
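
As an illustration, nearest-timestamp matching between two independently sampled streams could look roughly like this (a sketch, not our actual implementation):

import numpy as np

def match_closest(target_ts, source_ts):
    # For each timestamp in target_ts, return the index of the
    # temporally closest timestamp in source_ts (both sorted).
    idx = np.searchsorted(source_ts, target_ts)
    idx = np.clip(idx, 1, len(source_ts) - 1)
    left, right = source_ts[idx - 1], source_ts[idx]
    # Step back one index where the left neighbor is closer.
    idx -= target_ts - left < right - target_ts
    return idx

world_ts = np.array([0.00, 0.033, 0.066, 0.100])  # ~30 Hz world camera
eye_ts = np.arange(0, 0.11, 0.005)                # ~200 Hz eye camera
print(match_closest(world_ts, eye_ts))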

user-5882af 13 April, 2022, 18:12:09

Hello, I have been using your Pupil Cloud software to analyze some of my Pupil Invisible recordings and have run into some issues with the Marker Mapper enrichment feature. When I use project events that I created in the video instead of recording.begin and recording.end, the downloaded results appear to pick an arbitrary time to start analyzing the data rather than the project event I listed as the start event. This problem goes away as long as I use recording.begin and recording.end as the from and to events. I was wondering whether this is a known issue, and whether I can be fairly confident in the results as long as I use recording.begin and recording.end as my settings?

papr 13 April, 2022, 18:14:48

This issue is not known. Check out the sections.csv file. It should contain the time ranges for each exported section. Do you get data outside of these time ranges?
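
One quick way to check (a sketch; the column names "section id", "section start time [ns]", "section end time [ns]", and "timestamp [ns]" are assumptions about the export format and may need adjusting):

import pandas as pd

sections = pd.read_csv("sections.csv")
gaze = pd.read_csv("gaze.csv")

for _, s in sections.iterrows():
    g = gaze[gaze["section id"] == s["section id"]]
    # Flag gaze samples that fall outside the exported section range.
    outside = ~g["timestamp [ns]"].between(
        s["section start time [ns]"], s["section end time [ns]"]
    )
    print(s["section id"], "samples outside range:", outside.sum())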

user-5882af 13 April, 2022, 18:23:14

The sections.csv file has the correct start time for the chosen project event, but data does not show up in gaze.csv or surface_positions.csv until 46 minutes after the start time. I know that there should be data there, as when I used recording.begin/end there is data throughout the recording. My project event only cut off 3 minutes from the start and 2 minutes from the end.

papr 13 April, 2022, 18:25:05

Ok, thanks for checking! Could you share the enrichment id with us? With your permission, we would have a look at it to investigate the issue.

user-5882af 13 April, 2022, 18:24:15

the data does end exactly at the end event and has no extra data

user-5882af 13 April, 2022, 18:28:07

Will that provide access to the video? It is participant data and the video might have identifying data that I cannot share

papr 13 April, 2022, 18:35:49

See our Privacy Policy https://pupil-labs.com/legal/privacy/#when-and-by-whom-is-your-data-accessed%3F

When and by whom is your data accessed? - [...] - In case our systems detect errors, we will seek to fix them. If this requires accessing your recording data, we will seek your permission to do so. However, as a general rule we will not access any actual video data in this process, unless the nature of the error requires this and you have given your consent to it. In other words, we will try to investigate the issue without looking at the video data. Should that become necessary, we will come back to you to seek further permission.

user-e242bc 26 April, 2022, 11:28:50

Hi, I was wondering whether it would be possible for you to add functionality to the calibration mode so that calibration can be done on multiple screens at the same time (for example, we have three screens and want to display 5 dots one by one on each screen, or calibrate the three screens as a whole)? Or do you have any suggestions on how I could integrate such a feature via the API more easily? I'm confused as to where to start with this problem; we only want to use screen markers right now because they are easier and more convenient to control. Thank you for your help!

papr 26 April, 2022, 11:30:32

Hi! You can use the single marker calibration choreography in physical mode and display the markers yourself. So instead of printing the marker and having someone move it, you can write a program that displays the marker across the three screens.
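
For example, you could trigger the choreography via Pupil Remote while your own code animates the marker (a sketch; draw_marker is a hypothetical placeholder for your rendering code, and the default port 50020 is assumed):

import time
import zmq

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

def send(cmd):
    # Pupil Remote is request/reply: every command gets a response.
    pupil_remote.send_string(cmd)
    return pupil_remote.recv_string()

send("C")  # start calibration ('c' stops it)

for screen in range(3):
    for position in range(5):
        # draw_marker(screen, position)  # hypothetical: your rendering code
        time.sleep(2.0)  # let the subject fixate each marker position

send("c")  # stop calibration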

user-e242bc 26 April, 2022, 11:41:37

Thank you! I will try this!

user-c991da 26 April, 2022, 17:44:04

Hi, I am running into issues trying to install from source. I'm working on a uni project and want to build a plugin. On a Windows 10 install, I keep getting the following error on setup and I am not sure how to solve it. I worked through the Windows dependencies page word by word and still get this when trying to start. Any help is appreciated!

player - [ERROR] launchables.player: Process player_drop crashed with trace:
Traceback (most recent call last):
  File "C:\Users\wittw\Desktop\pupil\pupil_jku\pupil_src\launchables\player.py", line 858, in player_drop
    from pupil_recording.update import update_recording
  File "C:\Users\wittw\Desktop\pupil\pupil_jku\pupil_src\shared_modules\pupil_recording\update\__init__.py", line 15, in <module>
    from video_capture.file_backend import File_Source
  File "C:\Users\wittw\Desktop\pupil\pupil_jku\pupil_src\shared_modules\video_capture\__init__.py", line 40, in <module>
    from .uvc_backend import UVC_Manager, UVC_Source
  File "C:\Users\wittw\Desktop\pupil\pupil_jku\pupil_src\shared_modules\video_capture\uvc_backend.py", line 24, in <module>
    import uvc
ImportError: DLL load failed: The specified module could not be found.

papr 26 April, 2022, 19:40:31

Hey, did you know that you don't need to run from source to develop a plugin? That said, you are likely missing the turbojpeg lib or don't have it in your PATH.
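
For the plugin route, a minimal user plugin sketch (based on the documented Plugin API; the class name is arbitrary). Drop a file like this into pupil_capture_settings/plugins and the bundled app will load it, no source install needed:

from plugin import Plugin

class Example_Plugin(Plugin):
    # "by_class" means at most one instance of this plugin runs at a time.
    uniqueness = "by_class"

    def recent_events(self, events):
        # Called on every frame; events carries e.g. pupil and gaze data.
        pass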

user-1b6057 26 April, 2022, 20:27:54

I see stuff about DIY eye tracking. Are you able to hook arbitrary cameras up to the Pupil Capture software or do you need to have the Pupil Labs hardware in order to make use of this tracking?

papr 27 April, 2022, 06:15:59

Yes, you can use third-party cameras but they need to fulfil specific requirements to work out of the box. See this message for details https://discord.com/channels/285728493612957698/285728493612957698/747343335135379498

user-1b6057 27 April, 2022, 06:33:16

Ok awesome, thanks! I spent some time trying to see if I could make something to connect my cameras through NDSI and I almost have it, but something isn't quite right. My videos have green/red blocks and stripes flickering over everything when they come through, and it intermittently crashes the program. I figure it might be easier to just get it building from source so I can connect it straight to the back end

papr 02 May, 2022, 07:22:01

You might also want to have a look at this third-party project https://github.com/Lifestohack/pupil-video-backend

End of April archive