πŸ‘ core


user-b62a38 02 June, 2024, 02:47:26

HI there, Im having trouble with the surface recording, the surface markers are set and saved, and the arrow is the right way, but there is nothing in the exported gaze_positions_on_Surface_[screen name] file. Any suggestions?

user-07e923 02 June, 2024, 06:17:50

Hi @user-b62a38, thanks for reaching out! 🙂 Could you try clearing the pupil_player_settings folder and then running Pupil Player again? This folder is in Drive > Users > Username (on Windows).

user-b62a38 02 June, 2024, 07:11:53

Thank you! 😊

user-d1f142 03 June, 2024, 18:29:32

Hi, I'm currently looking at someone else's data. This data has the strange effect that when surface data is exported, the x and y coordinates on the surface are perfectly correlated (they're not the same, but almost). This can also be seen in the surface heatmap that can be displayed in Pupil Player: it is a straight line from the middle of the surface to the top right corner, with most of it located in the top right corner. I have never seen such an effect before with my own data. The surface seems OK, and the gaze coordinates as well. But when compared, the output surface coordinates do not match the expected result at all. Is there something obvious that I have overlooked that can result in such surface data?

user-f43a29 06 June, 2024, 08:49:33

Hi @user-d1f142 , could you restart Pupil Player with default settings and then give it another try? You can find this under the General Settings tab in the sidebar of Pupil Player.

user-2c4f22 04 June, 2024, 19:11:46

Hello! Would like some guidance here - this may be a "noob" question we are new to eye tracking. We have a Pupil Labs Core and want to run an experiment with users looking at a screen (iPad or Mac) and having the ability to use the eye tracker to move the mouse cursor on the screen (mouse cursor follows the gaze) and click on any buttons (either by fixating their gaze on the button for a specified duration or by blinking some number of times). Is doing something like this straightforward, and are there any guides for setting this up? I saw this online (https://docs.pupil-labs.com/alpha-lab/gaze-contingency-assistive/) but it seems like it's for Neon, and we have a Core.

nmt 05 June, 2024, 01:38:01

Hey @user-2c4f22! You might want to check out this community contributed project - https://github.com/emendir/PupilCore-CursorControl

user-3c6ee7 05 June, 2024, 02:09:31

Hey guys, I am trying to load previously used surface definitions for other recordings. I have copied the surface definitions v1 file into the other recording folder. However, when I load the recording into Pupil Player, it takes a long time to load with a white-ish screen. I have left it running for one hour with no success. I note here (https://docs.pupil-labs.com/core/software/pupil-player/) that it says it may take a few seconds because the surface tracker plugin needs to assess the whole recording. My recordings are about 5 minutes long and I have 25 predefined surfaces. Is it normal for it to take this long to load?

nmt 06 June, 2024, 00:42:29

Hi @user-3c6ee7 👋. This can depend on the computer and available RAM, since the marker cache needs to be rebuilt for the detections in the new recording(s). Twenty-five surfaces is quite a lot, so the time you report is not outside the realm of possibility. Did it finish in the end?

user-885c29 06 June, 2024, 09:49:47

Hi there. We are running into a continued issue with our Pupil Labs devices. We are running Pupil Core eye trackers through Pupil Capture v3.5.1. The glasses are wired directly to a laptop. However, upon starting, the right eye camera won't connect. The error message says "no camera intrinsic available" (among other things).

This error is consistent across both of our pupil core trackers, and occurs for both our Windows and Mac laptops. Any help would be appreciated!

nmt 06 June, 2024, 10:08:42

Hi @user-885c29! Can you please open a support ticket and we can correspond there? We might need to exchange order IDs etc.

user-d1f142 06 June, 2024, 09:59:01

Hi @user-f43a29, I tried this, but it doesn't change anything. Disclosure: I'm the lead developer of Blickshift Analytics and I received the data from a customer. Our software allows importing the Pupil Labs surface data into our Dynamic AOI Editor, and the Dynamic AOI Editor also allows transforming the gaze coordinates into the local coordinate system of the surfaces/AOIs. Surprisingly, if I do this, the result is correct, i.e. if I use the surf_positions together with the gaze_positions exported from Pupil Player, and do the transformation calculation on my side, I get the expected result, but it does not match the content of the gaze_positions_on_surface file.

From my side, I have solved the problem by telling our customer to use the Dynamic AOI Editor and import the surfaces. His deadline is today, so it won't make a difference if I can find the source of this problem, but I would still like to understand what is going on.

nmt 06 June, 2024, 10:06:19

Hi @user-d1f142! This is Neil stepping in for @user-f43a29. Are you able to share the recording in question? I.e. the full recording directory?

user-d1f142 06 June, 2024, 10:19:59

Hi @nmt , I have received permission from the customer to share the data. I will send a link via pm.

nmt 11 June, 2024, 02:45:55

Hi @user-d1f142! We reviewed the data and found why the issue arises. We can provide some concrete tips on how to improve the setup for future recordings, but it's not going to be possible to get surface tracking data from the recordings already made, unfortunately.

Explanation: For most of the tasks, none of the surface tracking markers are visible. Sometimes, one or two markers are in view, causing the surface to reduce to either a line or another erroneous polygon that doesn't encompass the area of interest. Additionally, the eye camera positioning is incorrect for some of the shared recordingsβ€”there are several instances where the pupil is not in view of the eye camera.

Steps for future recordings:

  1. Ensure that at least three (ideally four or more) surface tracking markers are always in view of the scene camera to accurately define the area of interest.
  2. Double-check that the pupil is roughly in the centre of the eye camera field of view and that it remains visible at all gaze angles that will be recorded in the testing.

For the existing recordings, the users might want to revert to manual annotation of the data, e.g. with the Annotation Plugin

We hope this helps!

user-885c29 06 June, 2024, 10:27:09

How do I do this? (sorry)

user-07e923 06 June, 2024, 11:12:32

Hey @user-885c29, you can create a support ticket in 🛟 troubleshooting 🙂

user-3c6ee7 07 June, 2024, 05:48:53

Hi @nmt, weirdly enough, when I tried again today the surface definitions were able to load in a few seconds! However, there are still a few recordings which, for reasons unknown, are not able to load the surface definitions in a timely manner (left on a white screen, loading for a long time), even though these recordings are not longer than the others.

user-3c6ee7 07 June, 2024, 06:15:33

What does it mean when Pupil Player says "surface gaze mapping not finished"? I get this message occasionally when trying to export fixation data

nmt 07 June, 2024, 06:32:03

I think it's also possible that the marker caches were nearly finished building when you reopened the recordings, but I can't say for sure. That message is literal. It means the surface mapping has not finished. It's recommended to wait for it to finish prior to exporting the data.

user-3c6ee7 07 June, 2024, 06:34:56

How do I know when surface mapping has finished? I am looking at the marker cache green progress bar at the bottom, which is fully green, but it still produces that message. Or does it throw that error if there are tiny blips in the marker cache (when the person briefly looked away and so there were no markers in their visual field in that brief moment)?

user-fc393e 07 June, 2024, 18:31:28

@user-d407c1 Hi Miguel, I see that the ticket we had open has closed. Were you able to get that mp4 file working, or did it not work out? Best, Kieran

user-63b5b0 07 June, 2024, 19:29:23

Hello! I use lab streaming layer to record pupillometry from my Pupil Core and align it to triggers from an external device. However, I want to record from Pupil Capture and analyze in Pupil Player as well. Do you know if I can record the Pupil Core video with lab streaming layer so I can align my data with triggers?

user-cdcab0 10 June, 2024, 07:25:43

Probably what you'll want to do here is to use remote annotations to send your events to Pupil Capture. They will be saved with the recording and visible in Player. Some sample code is available
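For reference, a minimal sketch along the lines of the pupil-helpers remote annotations example, using pyzmq and msgpack; the address, port, and label below are assumptions, and the Annotation plugin needs to be enabled in Pupil Capture:

```python
import time
import zmq
import msgpack

# Connect to Pupil Remote (default address/port; adjust if needed)
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")

# Ask Pupil Remote for the PUB port and open a publisher socket
pupil_remote.send_string("PUB_PORT")
pub_port = pupil_remote.recv_string()
pub_socket = ctx.socket(zmq.PUB)
pub_socket.connect(f"tcp://127.0.0.1:{pub_port}")
time.sleep(1.0)  # give the PUB/SUB connection a moment to establish

# Query Capture's clock so the annotation timestamp is in Pupil time
pupil_remote.send_string("t")
pupil_time = float(pupil_remote.recv_string())

# Build and publish the annotation; with the Annotation plugin enabled in
# Capture, it is stored with the recording and becomes visible in Player
annotation = {
    "topic": "annotation",
    "label": "external_trigger",  # example label, choose your own
    "timestamp": pupil_time,
    "duration": 0.0,
}
pub_socket.send_string(annotation["topic"], flags=zmq.SNDMORE)
pub_socket.send(msgpack.packb(annotation, use_bin_type=True))
```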

user-d407c1 08 June, 2024, 12:14:56

Hi @user-fc393e! Your ticket was closed due to inactivity, as we did not receive a response for more than 10 days. We tried several times to communicate there; basically, my colleague @user-cdcab0 found the issue and replied to you there on 2024-04-10. I am recovering his response for you here:

Hi, @user-fc393e - the crash you're seeing appears to be the result of a single corrupt record in your gaze data. I have written a custom plugin for you which, when enabled, will apply a patch to the fixation detector. The patch simply tells the fixation detector to ignore corrupted gaze records.

  • Close Pupil Player
  • Save the attached file to pupil_player_settings/plugins/fixation_patcher_kieran.py
  • Start Pupil Player and enable the "Fixation Detector Patcher" plugin
  • Enable the regular fixation detector plugin

@user-d407c1 already mentioned our list of best practices, but I'd like to also call attention to the Record Everything section. If this recording had included the calibration, as recommended, you'd be able to regenerate the gaze data post-hoc, which would have fixed the corrupt data.

fixation_patcher_kieran.py

user-fc393e 12 June, 2024, 16:16:31

Hi @user-d407c1, thank you for digging that up for me & sorry for the slow response. And thank you @user-cdcab0 for providing that patch.

The plugin works for detecting fixations, but I am still getting an error (either the same error, or just a frozen pupil player window) when trying to export those fixations. Is there any way to resolve the export (or another way to retrieve those fixations)?

And thank you for the info. I'll be sure to dig through the best practices a little more & implement those suggestions in our checklists.

user-94183b 10 June, 2024, 14:25:54

Quick question: Is this fisheye effect in the Pupil Core world view image normal for a 1920x1080 resolution?

Chat image

user-07e923 10 June, 2024, 14:30:16

Hi @user-94183b, yes, the fish-eye distortion occurs when you select 1920x1080 resolution for the world camera. You can learn more about the camera intrinsics estimation here. You can also learn how to undistort images of the fish-eye lens by following this tutorial.
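As a rough illustration of the undistortion step, here is a small OpenCV sketch; the camera matrix and distortion coefficients below are placeholders, so substitute the intrinsics estimated for your own scene camera (the tutorial above shows how to obtain them):

```python
import cv2
import numpy as np

# Placeholder intrinsics -- replace with the values estimated for your own
# scene camera (Pupil Capture stores intrinsics per recording, see the tutorial)
K = np.array([[830.0,   0.0, 960.0],
              [  0.0, 830.0, 540.0],
              [  0.0,   0.0,   1.0]])              # camera matrix
D = np.array([[-0.05], [0.02], [-0.01], [0.003]])  # fisheye distortion coefficients

frame = cv2.imread("world_frame.png")              # an exported 1920x1080 scene frame
undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)
cv2.imwrite("world_frame_undistorted.png", undistorted)
```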

user-94183b 10 June, 2024, 14:32:29

The links you provided are very informative. Thank you for your prompt reply.

user-d1f142 12 June, 2024, 08:35:53

Hi @nmt, I understand what you are saying (this is always an issue with this customer; I have talked to him repeatedly about this during prior projects, to no avail it seems ;-)). But this is not the issue I am referring to. I am talking strictly about frames where the markers are recognized, the surface is tracked correctly, and gaze data exists. Otherwise our software wouldn't be able to compute the correct surface coordinates; I only used data imported from Pupil Core for this. I have attached a screenshot. On the left is the video frame. You can see that the gaze point embedded in the video matches the gaze point overlaid by Blickshift Analytics exactly (blue circle). In the middle, the gaze data is mapped onto a screenshot of the surface (ignore the embedded gaze data, that's an artifact of the screenshot). The blue circle is the surface-local gaze point computed by BSA, using gaze data + surface data. As you can see, it matches the location on the video. The red circle is the surface-local gaze point exported by Pupil Core, for the same timestamp.

Chat image

user-f43a29 12 June, 2024, 11:02:54

Hi @user-d1f142 , I'm stepping in briefly for @nmt on this message.

It seems the surface definitions somehow became corrupted across all of these recordings. Perhaps something went wrong when using Pupil Player to make the initial exports, but I am not 100% sure what exactly happened.

I tried the following and got results that now look sensible:

  • If Pupil Player is open, then first close it.
  • Make a backup of the data.
  • In the recording folder, remove (or rename) exports, offline_data, square_marker_cache, surface_definitions_v01, surfaces.pldata, and surfaces_timestamps.npy.
  • Start Pupil Player and load the recording.
  • Restart Pupil Player with default settings.
  • Remake the surface and now export the data.

I have attached a quick plot of what I now get out.

Could they try this on their end and see if that fixes it for them?

Chat image
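If it helps to script that cleanup step for the affected recordings, here is a small sketch that moves the listed files/folders into a backup subfolder before re-opening the recording in Player; the recording path is an assumption:

```python
from pathlib import Path
import shutil

rec = Path("/path/to/recording")        # assumed recording folder
backup = rec / "surface_reset_backup"
backup.mkdir(exist_ok=True)

# Move the surface-related caches and definitions out of the way so that
# Pupil Player rebuilds them from scratch on the next load
for name in ["exports", "offline_data", "square_marker_cache",
             "surface_definitions_v01", "surfaces.pldata",
             "surfaces_timestamps.npy"]:
    item = rec / name
    if item.exists():
        shutil.move(str(item), str(backup / name))
```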

user-dfcad3 12 June, 2024, 08:39:56

Hello, I wanted to know where I can find the video I just recorded with my Pupil so I can drag it into Pupil Player.

user-07e923 12 June, 2024, 08:51:01

Hi @user-dfcad3, thanks for reaching out 🙂 If you're on Windows, you can find the recordings in C:\Users\Username\recordings.

user-d1f142 12 June, 2024, 08:39:56

As you can see, it matches neither the point we compute nor the video data. This is what I cannot explain. The red line in the line graph is the y-coordinate of the surface coordinate output by Pupil Labs. As you can see, it is almost constant, very close to 1. The light green one is the same coordinate computed by BSA. If it were a problem with the base data, how would I be able to compute this coordinate?

user-dfcad3 12 June, 2024, 08:55:06

yes it worked thanks a lot

user-d1f142 12 June, 2024, 16:06:04

Hi @user-f43a29 , I have just tested it, and this indeed works. Thanks for the help. I would still have liked to understand what is going on (especially since the customer has about 40 data sets where the surfaces result in these incorrect exports), but I'll know what to do, if this happens again.

user-cdcab0 12 June, 2024, 17:06:51

Hi, @user-fc393e - you're quite welcome! I still have a copy of your recording and the export works fine for me. Are you on the latest version of Pupil Player? Are there any meaningful messages in the log file?

user-d407c1 13 June, 2024, 06:31:36

Hi @user-311764! In pupil_positions.csv you can find a column named method, which tells you which detector was used for each row.

As for gaze, you can select which pipeline is used for gaze estimation (see choose the right gaze mapping pipeline). Note that if you followed our best practices and recorded the calibration, you would be able to run the gaze mapping post-hoc and choose a different pipeline.
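For example, a quick way to inspect which detector produced each row is a small pandas sketch like this; the export path is an assumption:

```python
import pandas as pd

# Path to a Pupil Player export folder (assumed; adjust to your recording)
df = pd.read_csv("recording/exports/000/pupil_positions.csv")

# The 'method' column states which pupil detector produced each row
print(df["method"].value_counts())
```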

user-dfcad3 13 June, 2024, 07:56:41

Hello, I need to do a presentation for students and I wanted to know if there are any games that are playable with an eye tracker.

user-07e923 13 June, 2024, 08:08:50

Hi @user-dfcad3, we don't have solutions for games that are playable with eye trackers. However, you might want to check out this community github page that utilizes Pupil Core's pipeline for human-computer interaction. Perhaps some of the projects might be interesting for you to demo to your students.

The files on that page are completely open-source. Feel free to tailor them to your use case.

user-dfcad3 13 June, 2024, 08:11:58

ok, thank you

user-dfcad3 13 June, 2024, 09:14:51

Is it possible to control the mouse with the Pupil Core or Pupil Invisible? If so, how do I do it, please?

user-07e923 13 June, 2024, 09:25:59

You can take a look at this project that uses Pupil Core and a head tracker to control the cursor. As for Pupil Invisible, you'll need to adapt this experimental Alpha Lab tool with the real-time API to achieve some form of gaze-contingent mouse cursor control. Please note that while the tool described in Alpha Lab is intended for Neon, it should also work with Invisible.
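As a starting point for the real-time API part, here is a minimal sketch that receives gaze from an Invisible (or Neon) device on the same network using the pupil-labs-realtime-api package. It only shows the gaze stream; mapping gaze onto the screen for cursor control still needs the marker-based approach from the Alpha Lab article:

```python
from pupil_labs.realtime_api.simple import discover_one_device

# Find the Companion device on the local network (may take a few seconds)
device = discover_one_device()
print(f"Connected to {device}")

try:
    while True:
        gaze = device.receive_gaze_datum()
        # x/y are in scene-camera pixel coordinates; further mapping to the
        # screen (e.g. via AprilTag markers) is needed for cursor control
        print(gaze.x, gaze.y, gaze.worn, gaze.timestamp_unix_seconds)
finally:
    device.close()
```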

user-75e0ea 13 June, 2024, 10:38:27

Hi, I am using the Pupil Invisible and the Pupil Core for my study. Unfortunately, there were problems with the recordings made with the Core glasses, which I only found out about when I analysed the recordings. The situation is as follows: the settings and calibration worked and the fixations can be seen at the beginning, but as soon as the test person starts riding their bike, the pupils are barely visible due to the sun and there are no more fixations on the images. How can I avoid this overexposure? What exactly do I have to set? I made the settings and the recordings on my laptop. Is there also a mobile phone app for the Core that I can use? With the Pupil Invisible and the matching app, the recordings work perfectly under the same conditions. I need to use two pairs of glasses as I have to take two recordings at the same time. Is there any way I can edit the recordings that have already been taken so that I get the fixations? I need the recordings for my dissertation.

nmt 13 June, 2024, 10:50:11

Hi @user-75e0ea! It sounds like the eye images recorded by Core are over-exposed. This can happen in direct sunlight. To mitigate this, you would need to set the eye cameras to auto-exposure mode in Pupil Capture, but it's not guaranteed to work in all cases. To determine if they are recoverable, we'd need to see an example. Can you share one of your recordings with [email removed]? Also note that Pupil Core needs to be tethered to a laptop/desktop computer for operation.

Invisible doesn't have this issue, as auto-exposure is always on; moreover, it uses a deep-learning approach to gaze estimation that doesn't rely on traditional pupil detection. That's why it will have worked well even in direct sunlight.

user-fc393e 13 June, 2024, 14:44:51

This is the error I'm getting

Chat image

user-cdcab0 13 June, 2024, 22:52:46

Do you mind opening a support ticket in 🛟 troubleshooting?

user-fc393e 13 June, 2024, 14:48:14

This is what I get if I don't run surfaces

Chat image

user-cdcab0 13 June, 2024, 23:45:57

I went down a similar path looking for games that are playable by mouse and then controlling the mouse via gaze. Here's a little "shooter" horde game I made and played while we were at a conference a couple of weeks ago. It's played using a Neon and the real-time screen gaze package

user-11f244 21 June, 2024, 11:28:24

This looks like so much fun! I would love to try it!

Also, back in the day I built something similar with EyeGestures (an open-source eye tracker of mine) running on the web: https://eyegestures.com/game

It is far less robust than what you are showing, but it is fun, as this was the first type of game I wanted to implement.

user-cdcab0 13 June, 2024, 23:46:49

Also with Neon and the real-time-screen-gaze package, here's gaze-controlled Fruit Ninja

user-dfcad3 14 June, 2024, 08:15:21

Oh thanks, do I need to install some Python modules to use it, or do I just need to run the code?

user-cdcab0 14 June, 2024, 08:17:04

For Core, you will need to configure a surface on your screen with Pupil Capture. Then, using Python, you can read the gaze coordinates from Capture and use that to control the mouse
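A minimal sketch of that loop, assuming a surface named "screen" has been defined in Pupil Capture's Surface Tracker and that pyzmq, msgpack, and pyautogui are installed:

```python
import zmq
import msgpack
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()
SURFACE_NAME = "screen"  # assumed name of the surface defined in Pupil Capture

# Ask Pupil Remote for the SUB port, then subscribe to surface gaze messages
ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.subscribe(f"surfaces.{SURFACE_NAME}")

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.unpackb(payload, raw=False)
    for gaze in msg.get("gaze_on_surfaces", []):
        if gaze["on_surf"] and gaze["confidence"] > 0.8:
            x, y = gaze["norm_pos"]
            # surface coordinates are normalized with the origin at the
            # bottom-left, so flip y before converting to screen pixels
            pyautogui.moveTo(int(x * SCREEN_W), int((1 - y) * SCREEN_H))
```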

user-dfcad3 14 June, 2024, 08:22:48

So I don't need to use the real-time-screen-gaze package that you used for Neon when using the Core?

user-cdcab0 14 June, 2024, 08:27:28

Correct. The real-time-screen-gaze package essentially provides for Neon the same functionality that the Surface Tracker plugin provides for Core

user-dfcad3 14 June, 2024, 08:43:32

Ok thanks, I will test it and come back if I have some issues.

user-5a4bba 18 June, 2024, 13:42:51

Hi! I'm trying out post-hoc calibration with manually identified referents to try to correct for some slippage we saw over our experiment. Is there a way to check that this is having a positive impact on fixation estimates? I can see fixation detection number before and after the post-hoc calibration, but I'm not sure if there's a better way to determine if the referent calibration is actually helping? Or to quantify how much it may be helping (e.g., more so for some participants than others)?

user-480f4c 18 June, 2024, 15:50:01

Hi @user-5a4bba! I understand that you are comparing fixations that were detected during the recording (online fixation detection in Pupil Capture) vs. fixations detected after the post-hoc calibration using the offline fixation detector in Pupil Player, is that right?

In that case, the two sets of fixations are not directly comparable, and the post-hoc calibration is not the only factor that might explain differences between the two. Keep in mind that although both online and offline detectors implement a dispersion-based filter, these have slightly different implementations, so online results won't always correspond with offline results. This can explain different fixation counts. You can read more about that here

To quantify your fixations' data quality you could look at the confidence value that is provided for each fixation. See also this relevant message: https://discord.com/channels/285728493612957698/285728493612957698/1117708961697759282
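For instance, a quick sketch for summarising the per-fixation confidence from a Pupil Player export; the path and the 0.8 threshold are just assumptions for illustration, and column names can vary slightly across Player versions:

```python
import pandas as pd

# Assumed export path; adjust to your own recording/export folder
fixations = pd.read_csv("recording/exports/000/fixations.csv")

# Each exported fixation carries a confidence value
print(fixations["confidence"].describe())

low = (fixations["confidence"] < 0.8).mean()
print(f"{low:.1%} of fixations have confidence below 0.8")
```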

user-24c29d 20 June, 2024, 15:57:54

Hi! I would like to know if there is a simple solution to automate the export of fixation data with fixed parameters (i.e. dispersion, min/max duration). Thanks in advance for your help!

user-f43a29 22 June, 2024, 18:48:36

Hi @user-24c29d , although Pupil Player does not perform batch exports of data, you can check out the community contributed plugins, where there is a third-party batch exporter that you could try.

user-cdcab0 23 June, 2024, 16:09:42

Pretty sure I've seen your posts on reddit!

user-11f244 23 June, 2024, 16:46:36

Very likely!

user-11f244 23 June, 2024, 16:46:47

I've been posting a lot about my project recently

user-698509 24 June, 2024, 19:17:37

Hi, does Pupil Core work properly with Mac? I am getting calibration errors.

Please get back to me.

nmt 25 June, 2024, 03:19:01

Hi @user-698509! Yes, Pupil Core works properly with Mac. Can you please elaborate on what you mean by calibration errors? It would actually be helpful if you could share all the steps you've taken to set up the Core system prior to initiating a calibration. Screenshots/recordings are extra helpful!

user-4c48eb 25 June, 2024, 09:12:27

Hello all, I am trying to do a study using the surface tracker plugin. In the study we give the subjects 15 images with four markers at the corners of each image. Each image has to be analyzed on its own, but to make things easier we thought of using the same markers for each image, since between two consecutive images there's a quiz with no markers, so we use the surface_events file to know when an image is displayed or not. However, we've seen that the surface recognition is not always accurate, as you can see in the screenshots from Pupil Player. Could it be because the sizes of the images are different, or is it something else? I can see the camera is not exactly perpendicular to the monitor; could that be it? We have also turned down the exposure time since the monitor was too bright and the camera had difficulties finding the markers for the calibration.

Chat image Chat image Chat image

user-cdcab0 25 June, 2024, 09:35:19

Hi, @user-4c48eb - some notes:

  1. Yes, the surface mapping is being warped because you've anchored the tags to the corners of the images, and those images are different sizes. When you define a surface, that definition consists of the relative sizes and positions of the markers with each other. Here, you're changing the relative positions of the markers with each other, and that needs to stay static. Alternatively, you could either anchor the tags to the corner of the screen (instead of the individual images) or pad your smaller images so that they are all the same size.
  2. The angle of the camera relative to the surface is fine. This only becomes a problem when the angle is so steep that the markers cannot be detected, and you're not even close to that in these images.
  3. I wouldn't rely on surface visibility events. If, for whatever reason, the surface isn't detected in the camera (participant looks elsewhere, bad lighting/contrast (even momentarily), too much motion blur, etc), then you'll have a surface event that doesn't correspond with the stimulus event it's supposed to. You might be fine, but I would certainly be careful to review all the data before using this method.

A similar (but less error-prone) method which also doesn't require you to manually annotate and doesn't require separate surface definitions for each image would be to define a surface for the quizzes (using a different set of apriltag markers from the stimuli, but each quiz can use the same markers/surface as each other). That way, instead of looking for the disappearance of the stimulus followed by its next reappearance, you'd then be looking for the appearance of the quiz surface followed by the appearance of the stimulus surface.
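To make the event-sequence idea concrete, here is a rough pandas sketch that reads the exported surface_events.csv and pairs enter/exit events per surface into presence intervals; the path and column names (surface_name, world_timestamp, event_type) are assumptions based on recent Player exports, so double-check them against your own file:

```python
import pandas as pd

# Assumed export path; adjust to your recording/export folder
events = pd.read_csv("recording/exports/000/surfaces/surface_events.csv")

intervals = []
open_enters = {}  # surface name -> timestamp of the last 'enter' event
for _, row in events.iterrows():
    name, ts, kind = row["surface_name"], row["world_timestamp"], row["event_type"]
    if kind == "enter":
        open_enters[name] = ts
    elif kind == "exit" and name in open_enters:
        intervals.append({"surface": name,
                          "start": open_enters.pop(name),
                          "end": ts})

print(pd.DataFrame(intervals).head())
```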

user-4c48eb 25 June, 2024, 09:50:55

Thank you @user-cdcab0 for the clarification and the quick response. I understand your last paragraph, but I need to track the gaze on the image. How can I use the surface on the quiz to see where the participant is looking on the image?

I'd also like your input on whether I should anchor the tags to the corners of the screen or not. I'm looking to do an analysis like the one attached, but I'm not sure I could do it with the markers at the corners of the screen, as the dimensions of the image depend on the dimensions of the screen itself. So, I think the best option would be to create different surface definitions for each image. That way, I wouldn't rely on the surface visibility events, but I'd have a separate file for each surface. What do you think?

And one last question: luckily the recording was pretty good, so all the surfaces are tracked correctly. To get accurate data from this recording, do you think I could manually export the data from the frames where only one image is present, deleting the previous definition of the surface and re-adding each time?

Chat image

user-cdcab0 25 June, 2024, 10:18:39

"How can I use the surface on the quiz to see where the participant is looking on the image?" You wouldn't. I only suggest the quiz surface to provide more reliable surface event sequences, in order to know which stimulus they are on. You will still need to define a surface on the stimuli themselves.

"The dimensions of the image depend on the dimensions of the screen itself" I'm not sure I understand this, but I'd like to re-iterate one of my suggestions. Determine the height of the tallest image you have and the width of the widest image you have. Then add a black frame to all of your images so that all of them are as tall as the tallest and as wide as the widest. Now all of your images are the exact same size and you can display your AprilTag markers just as you are now (anchored to the corners of the images) without having to worry about surface warp (see the padding sketch at the end of this message).

"I think the best option would be to create different surface definitions for each image" This certainly is a solution that would work, but I assumed you had a reason for avoiding it - like if there are many stimuli, defining a separate surface for each one could be very tedious.

"Do you think I could manually export the data from the frames where only one image is present, deleting the previous definition of the surface and re-adding each time?" This would work too, but again, it sounds very tedious to me.
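Here is the padding idea as a small Pillow sketch; the folder names are assumptions, and it simply centres each image on a black canvas of the maximum width and height:

```python
from pathlib import Path
from PIL import Image

src = Path("stimuli")            # folder with the original images (assumed)
dst = Path("stimuli_padded")
dst.mkdir(exist_ok=True)

paths = sorted(src.glob("*.png"))
images = [Image.open(p) for p in paths]
max_w = max(im.width for im in images)
max_h = max(im.height for im in images)

for p, im in zip(paths, images):
    # centre the original image on a black canvas of the common size
    canvas = Image.new("RGB", (max_w, max_h), "black")
    canvas.paste(im, ((max_w - im.width) // 2, (max_h - im.height) // 2))
    canvas.save(dst / p.name)
```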

user-4c48eb 25 June, 2024, 10:20:51

Thank you very much, this was very helpful.

user-8e1492 26 June, 2024, 10:41:32

Hi there, I am a researcher using the Pupil Core glasses. I am building a unique setup for a research project and need to find a way to connect the cameras to USB. I am hoping to buy a connector/cord where one side of the wire is USB and the other is the type of connector suitable for these cameras. I have tried buying a few different USB-to-JST (I think that's the connector on the camera?) cables, but the camera connector is the wrong size each time. I'm attaching photos of what I am talking about. Is it possible to get the specs for that camera-side connector, or do you possibly sell a USB-to-camera connector solution? Thanks so much!

Chat image Chat image

user-f43a29 26 June, 2024, 10:55:35

Hi @user-8e1492, Rob here 😉 I hope all is good 😄

Please see the next message below for the details

user-8e1492 26 June, 2024, 10:56:57

Hi @user-f43a29 Fancy seeing you round these parts 😂 Thanks so much! This is super helpful. Hope you are doing well!

user-f43a29 26 June, 2024, 11:08:51

@user-8e1492 My apologies. The connector needs to be the JST SH type (1.0 mm), 4-pin variety.

You can of course order your own, but you can also order these direct from us. We cannot make guarantees about 3rd party cables.

Just send an email to info@pupil-labs.com saying that you'd like to purchase JST to USB cables for Pupil Core.

user-14a4fc 26 June, 2024, 15:51:34

Hey all, I'm with TeamOpenSmartGlasses & we're looking into eye+pupil tracking as a means of detecting the wearer's focus.

Does anyone have an assembled (or partially assembled) Pupil Core DIY for sale? Would like to save time building one, if possible.

P.S. Sorry if this is the wrong channel to ask!

user-5c56d0 27 June, 2024, 03:33:45

Excuse me.

Is there a way to use Pupil Core on Windows 11? Last year, the application did not work on Windows 11.

user-07e923 27 June, 2024, 03:54:29

Hi @user-5c56d0, thanks for getting in touch 😄. Pupil Core's software works with Windows 11; my own system also runs Windows 11.

Please try installing the software again, and open a ticket in 🛟 troubleshooting if the application doesn't work for you. We'll guide you through some debugging steps if that happens.

user-5c56d0 27 June, 2024, 03:59:13

Thank you for your reply.

user-0b4995 27 June, 2024, 09:12:50

Hello, when trying to export a failed recording with Invisible using the Pupil Player, I get the following error message: "NumPy boolean array indexing assignment cannot assign 155640 input values to the 264126 output values where the mask is true" Is there anything I can fix to make it work?

Chat image

user-f43a29 27 June, 2024, 09:18:17

Hi @user-0b4995 , the main error that caused the crash is above that one. It says that eye0_lookup.npy is missing. By "failed", you mean the software crashed during the recording?

user-0b4995 27 June, 2024, 09:18:51

yes it crashed and we copied the data off the phone

user-f43a29 27 June, 2024, 09:19:02

Also, since this is with respect to Invisible, I hope you don't mind that I will move these messages to the 🕶 invisible channel.

user-0b4995 27 June, 2024, 09:19:22

sure no problem! any way I can solve the issue?

user-f43a29 27 June, 2024, 09:21:09

I will check with the others. What data do you need exactly? Do you want to recover all data that was recorded until the point of the crash, or just a subset of data streams, like gaze + world video?

Also, have you uploaded this recording to Pupil Cloud? What is its status there?

user-0b4995 27 June, 2024, 09:23:12

Well, I just copied eye1_lookup.npy and eye0_lookup.npy from another recording and it seems to work now! Skipping through the files, the gaze looks good.

user-f43a29 27 June, 2024, 09:27:20

Please be aware that this is not recommended practice. Mixing data from recordings can lead to unexpected results and potentially incorrect conclusions.

user-0b4995 27 June, 2024, 09:28:58

Correct! I will check the export, but since we coded this one manually and the gaze seems to make sense, I suppose this should work in this case. Thanks for your help!

user-f43a29 27 June, 2024, 09:30:03

I will still check and come back with a potential solution/explanation.

user-0b4995 27 June, 2024, 09:31:11

thanks rob!

user-14a4fc 28 June, 2024, 00:17:29

Some quick DIY Core questions:

  1. For the exposed and developed color film negative or Edmund’s 5-mm-diameter filter part, is this the correct part?: https://www.edmundoptics.com/p/5mm-diameter-optical-cast-plastic-ir-longpass-filter/41541/

  2. I read somewhere that the accuracy of pupil diameter tracking heavily relies on your eye cam's resolution. Is the 720p Microsoft HD-6000 good enough for this, or do I need to modify the housing to accept another Logitech C615 (which is 1080p)?

Thanks!

nmt 29 June, 2024, 06:56:45

Hi @user-14a4fc 👋

  1. That filter should work!
  2. If you're using the Pupil Core pipeline for pupillometry, resolution isn't typically an issue. As long as you have a clear image of the pupil with good contrast between the pupil and the iris, it should work! You can read more about the pipeline in this section of the docs. There are also examples that show you what an eye image typically looks like. If you want to grab some actual eye cam videos, head to our website and download an example recording.

Good luck with your build!

user-a79410 28 June, 2024, 06:00:29

Hi there, I downloaded your installation files for Core on Linux Ubuntu 22.04 LTS, installed them with the Gdebi package installer, and tried to run via the command line. However, I ran into an error.

user-a79410 28 June, 2024, 06:00:52

/opt/pupil_capture/libpython3.6m.so.1.0(+0x160a84)[0x7efd67160a84] /opt/pupil_capture/libpython3.6m.so.1.0(PyEval_EvalCodeEx+0x3e)[0x7efd6716106e] /opt/pupil_capture/libpython3.6m.so.1.0(PyEval_EvalCode+0x1b)[0x7efd6716109b] /opt/pupil_capture/pupil_capture(+0x372c)[0x55d7d5b8c72c] /opt/pupil_capture/pupil_capture(+0x3b1f)[0x55d7d5b8cb1f] /lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7efd67600d90] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7efd67600e40] /opt/pupil_capture/pupil_capture(+0x24fa)[0x55d7d5b8b4fa]


cysignals failed to execute cysignals-CSI: No such file or directory

Unhandled SIGSEGV: A segmentation fault occurred. This probably occurred because a compiled module has a bug in it and is not properly wrapped with sig_on(), sig_off(). Python will now terminate.


user-f43a29 30 June, 2024, 22:17:10

Hi @user-a79410 , instead of running it from the command line, can you try the following:

  • Open the "Activities" overview of Ubuntu. Either click it in the top left corner or press the Windows button on your keyboard
  • Start typing "Capture". You will see the icon for Pupil Capture
  • Do not immediately click on the icon. First, right click the icon and see if there is an option "Run with dedicated graphics card". If so, then click that. If not, then just start Pupil Capture

Let us know if that improves things. In addition, I have the following question:

Are you on a laptop with an Nvidia graphics card?

user-a79410 28 June, 2024, 06:01:31

Could you please advise if you have other install instructions, or how to overcome that error?

user-a79410 28 June, 2024, 06:01:40

many thanks in advance.

End of June archive