Hi! I would like some information. Is the HTC VIVE Binocular Add-on compatible with the XR Elite headset?
Hi @user-5562d7 👋! Our add-on is only compatible with the HTC Vive, Vive Pro, and Vive Cosmos. However, you could opt for a de-cased version of Pupil Core (cameras & cable tree), but you would need to integrate it into your VR headset yourself. If you are interested, please reach out to [email removed]
Alternatively, VR/AR prototyping is possible with Neon (https://pupil-labs.com/products/neon/) + our "Bare Metal" kit (https://pupil-labs.com/cart/?ne_bm=1), but this is not a turnkey solution. Mounts would need to be prototyped to install it according to the geometries and constraints of the headset. We have decided to open-source the nest and Neon module .step files to assist users in this process (see here: https://github.com/pupil-labs/neon-geometry).
Thanks! If I purchase one or multiple add-ons from you, I need to calculate the customs fees since I live in Canada (Quebec). To simulate the amount of taxes, I need to provide the type of material. Among the choices are: "cameras", "video consoles", "video games", "printers", "software", "computers", "batteries", "tablets and e-readers", "phones", "televisions/projectors/monitors". In which category do VR headset add-ons fall? Thank you.
Could you please contact sales@pupil-labs.com in this regard?
OK; Thanks!
Dear PupilLabs Team,
I hope this message finds you well. Our names are Noah and Layne, and we are working on a research project on enhancing road safety through eye-tracking technology.
We are interested in knowing if PupilLabs is interested in exploring the possibility of incorporating their eye-tracking solutions into our research. Could we connect with a representative to discuss potential collaboration opportunities and pricing details?
Thank you for your time, and we look forward to your response.
Layne and Noah
Hi @user-16d3e2 and Layne 👋! I understand you also contacted us via email, right? If so, we will reply there as soon as possible!
Dear PupilLabs Team, My name is George and I have very stupid yet simple question about Pupil Core Open Source Code. How do I run it? I have read some instructions about the virtual environment required, yet I am unable to run it from the source. Are there other sufficient instructions that I am unable to find? Thank you
Hi George! Is there any specific reason why you would like to run it from the source code rather than the pre-compiled version? Otherwise, could you please detail where you are stuck and what operating system you have?
Hi, I wonder if the Pupil Core uses Infrared Oculography or Video-Oculography (VOG)?
Pupil Core uses infrared light to illuminate and capture images of the eyes
Hi, I have a Pupil Invisible and I'm trying to export my recordings. I don't have the gaze.csv in my exported files. Is this something I have to generate manually?
Greetings, @user-bda2e6 ! Do you have the Raw Data Exporter plugin enabled? (https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter)
Hi, I'm trying to stream the video and data from a Raspberry Pi to another computer. The whole system was developed and worked perfectly before, but after sitting for months without running, it now fails to stream videos. I could still start Pupil Capture from the other computer, but no videos were received. Please see the image attached.
Any tips would be greatly appreciated!
Hi developers. Thank you for developing this amazing piece of hardware. We are integrating this eye tracker in our driving simulator. In very plain language, the eye gaze data is used to manipulate the user interface on the windshield of the self-driving car. We have defined multiple areas of interest (i.e., surfaces for the gaze mapper). The data retrieved has a lag of a couple of seconds; as a result, there is a lag in the user interface. We have a high-performance PC: 64 GB RAM, RTX 3080, Intel i9 12-core, and a 10 GB graphics card. Is this a performance issue caused by hardware that isn't up to the mark, or is it because we have defined multiple surfaces (approx. 8 surfaces), leading to a lag in the data being sent to the subscribers? Let me know if you have any questions (ping me). I would really appreciate your response.
Edit: we have 14 QR code trackers in our environment.
Edit: I am running the Pupil eye tracker from source code and we have made a small modification to the data type of the data being sent (only for a specific surface) to the subscribers. We are aware that this could lead to a performance decrement, but we think it is not the major contributing factor.
Edit: I have overlapping surfaces too.
Sorry for pinging you like this @user-cdcab0 , are you able to determine the cause? We are actually stuck (for days) in development.
Hi, my name is Ken. When I transfer recordings via USB, how can I get the "Gaze Overlay" movie (mp4 file) as shown in the Neon Companion application?
Hi @user-b03a4c 👋! Just to clarify: are you using Pupil Core or Neon?
The driving simulator software we are using btw, is very expensive in terms of computation.
Have you examined the CPU load and RAM usage while running both Pupil Core and the driving sim? If you're maxing out the system, you may consider moving Pupil Core to a separate machine and streaming the data over the network
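For reference, a minimal sketch (not an official snippet) of how the receiving machine could subscribe to surface-mapped gaze over the network using Pupil Capture's Network API; the IP address is a placeholder for whichever machine runs Capture:

import zmq
import msgpack

CAPTURE_IP = "192.168.1.10"  # placeholder: address of the machine running Pupil Capture

ctx = zmq.Context()

# ask Pupil Remote (default port 50020) for the port that publishes data
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect(f"tcp://{CAPTURE_IP}:50020")
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

# subscribe to surface-mapped gaze (use "gaze." for raw gaze instead)
subscriber = ctx.socket(zmq.SUB)
subscriber.connect(f"tcp://{CAPTURE_IP}:{sub_port}")
subscriber.setsockopt_string(zmq.SUBSCRIBE, "surfaces.")

while True:
    topic, payload = subscriber.recv_multipart()
    data = msgpack.loads(payload, raw=False)
    print(topic.decode(), data.get("name"), data.get("timestamp"))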
Thank you for your response. I will get back to you when I go back to my research lab and confirm if that is the case.
Hello
I have a question about the pupil labs eye camera. We want to connect it to an MCU, and we were wondering what cable it uses for data transfer?
Hi @user-b1befe! Could you describe your set up in more detail? E.g. do you want to connect a single eye camera to your MCU?
Hi. I would like to use a USB extension cable to extend the length of the Pupil Core data cable. Is this okay? If possible, are there any recommended USB extension cables? Thanks!
Hey @user-4bc389! Yes, this is possible using active USB3.0 hubs as repeaters. With this approach we have achieved cable lengths of around 8 meters. I don't have a particular brand to recommend, but I'd probably go for something of reasonable quality.
Hello Pupil team, recently we ran into several cases where Pupil Capture keeps detecting tiny "pupils" on participants' eyelashes and eyelids. (See pics.) The problem is that fake pupils can be detected even when the real pupil is visible. We initially thought that it could be women's mascara confusing the algorithm, but we've observed this in cases where the participant did not apply make-up. Note: the experiment environment is a low-light environment (as we want to measure pupil dilations) and the computer screen is the brightest source of light. I tried to change the ROI but it does not always work. Can you give us some advice on how to overcome this? Thanks.
Looks like this participant is wearing mascara, and the only work-around for that is not to wear it (or eye makeup in general)
We are considering a Raspberry Pi MCU.
For the sake of clarity, the Raspberry Pi is not an MCU. It's a single-board computer (SBC). It consists of a CPU, RAM, and storage - like a very small PC. As such, these types of devices run typical mainstream operating systems (like Linux, Android, Windows, etc). These also typically have similar I/O capabilities as desktop PCs, like USB, HDMI, ethernet, etc plus sometimes general purpose I/O (GPIO) and bus interfaces like SPI, I2C, etc
Microcontrollers (MCU) are a separate class of devices. They typically have significantly less memory and do not run full-blown operating systems - instead only running a solitary program. For connectivity, these typically are limited to GPIO and bus interfaces like those mentioned previously - so usually no USB, no HDMI, no ethernet, etc.
The Raspberry Pi is a single-board computer (SBC), and the Raspberry Pi Pico is a microcontroller (MCU).
Hi Dom, thank you for the clarification. If I were to have an MCU, how would I connect the eye cameras to it?
More specifically, what kind of wire connection does the eye camera use to be connected to an MCU or SBC?
Core eye cameras + MCU
Hi @user-5346ee! To expand on my colleague's answer, mascara can influence the tear film, exacerbating reflections that appear on the pupil images. This can negatively impact pupil detection.
That said, there does seem to be decent contrast between the iris and pupil in the image you've posted.
Would you be able to share an example recording with us such that we can provide more concrete feedback? You can do so via [email removed] Please share the entire recording directory.
Will do later
I am still a little confused about this. I do not have a gaze_positions.csv, only a gaze.pldata, in which I see "gaze_point_3d" and "eye_center_3d". Are these the same as gaze_point_3d_x etc. described in https://docs.pupil-labs.com/core/software/pupil-player/#raw-data-exporter?
And this gaze_point_3d_z - the z position of the 3d gaze point - is that along the optical axis?
Ah, it sounds like you have a recording, but haven't exported data from it (https://docs.pupil-labs.com/core/#_8-export-data). In short, open your recording in Pupil Player, enable the Raw Data Exporter plugin, and then hit the export button. This will make a new folder inside your recording named "Exports", and inside that a numbered folder for each export you do. You'll find the exported data much easier to work with than the .pldata
files, which are really just an intermediary format.
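Once exported, the CSVs are easy to load with standard tools. A minimal sketch, assuming pandas and a hypothetical export path (the folder and column names follow the Raw Data Exporter docs linked above):

import pandas as pd

# hypothetical path: <recording>/exports/000/gaze_positions.csv
gaze = pd.read_csv("path/to/recording/exports/000/gaze_positions.csv")

# gaze_point_3d_x/y/z mirror the "gaze_point_3d" field stored in gaze.pldata,
# flattened into one column per axis
print(gaze[["gaze_timestamp", "gaze_point_3d_x", "gaze_point_3d_y", "gaze_point_3d_z"]].head())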
The camera intrinsics file is just an msgpack-ed dictionary:
import msgpack

file_path = 'path/to/world.intrinsics'
with open(file_path, 'rb') as file_handle:
    world_intrinsics = msgpack.unpack(file_handle, strict_map_key=False)
print(world_intrinsics)
also how do I open the world.intrinsics file? I assume this is where the camera intrinsics are located
Hello Pupil team, after analyzing some Pupil Invisible recordings and exporting them into iMotions, it has come to our attention that the gaze maps are off-center for many of the recordings. There is nothing in iMotions that can be done to fix this, but I was wondering if there is a way to correct the off-center data in the original recordings in Pupil Cloud? Is there a specific feature that would let me pull the gaze map off the videos or alter the positioning of the gaze map on the video? Thanks!
Why is this popping up? My macOS version is Sonoma 14.0
Hi, @user-870276 - for technical reasons, Pupil Capture on MacOS 12 and newer has to be launched with elevated privileges. More information and instructions are here: https://docs.pupil-labs.com/core/software/pupil-capture/#macos-12-monterey-and-newer
thank you so much @user-cdcab0
Hi @user-6072e0! One way is to use the surface output image (the detected one) to perform the OCR, effectively maintaining the same coordinates. Isn't that an option for you?
Oh sorry, I didn't know there was a surface output image, hahaha
I have defined the surface and recorded it like the image below, but there is no gaze_positions_on_surface_<surface_name>.csv
file in the recording folder. How to get it? Thank you for answering
setting with movement involved
We are analyzing eye-tracking data that was recorded using your AR/VR add-on for a VR simulation users are playing through an HTC Vive Cosmos. We want to be able to generate heat maps using the surface tracker plugin in Pupil Player but we are having difficulty enabling that feature. Specifically, it says we cannot add surfaces. Are we doing something wrong? Can we get more insight into how to generate heat maps using Pupil Player from a pre-recorded dataset?
Hi @user-b44c7c 👋 The Surface Tracker plugin (https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking) in Pupil Player needs apriltag markers present in the scene in order to define a surface and generate a heatmap. First things first: did you have these apriltag markers in your VR scene? And if you did, how many of them did you use?
@user-c2d375 I'm with Julianna responding to your query above. We did not have Apriltag markers in the VR scene. How do we set these tags up? Is it something that can be done only before the recording?
Hi @user-eb9296 👋 Yes, if you intend to utilize the Surface Tracker plugin and define AOIs within your VR scene, it's crucial to position those markers in the scene prior to starting the recording. Unfortunately, it is not possible to add those post-hoc. For detailed instructions on accessing the apriltag markers collection and preparing your environment to use this plugin, please refer to our documentation (https://docs.pupil-labs.com/core/software/pupil-capture/#surface-tracking).
Hi, I have a technical issue with the device. The eye tracker doesn't seem to connect to the app; it doesn't show the display for the glasses, and no data were collected.
Hi, @user-29f76a - if you're on a Mac, you'll need to run Pupil Capture with elevated privileges. See https://docs.pupil-labs.com/core/software/pupil-capture/#macos-12-monterey-and-newer for more information. Otherwise, please tell us a little more about your setup (what operating system, what version of our software, have you used it successfully in the past, etc)
I use the phone that comes with the eye-tracking glasses (Neon Module and Frame). But the thing is, the glasses cannot connect to the phone @user-cdcab0
like this
Hi, while collecting data with the phone provided, a recording error shows up, and I think the files have almost reached the storage limit of the phone. Could I remove the files manually on the phone? Would files already uploaded to Pupil Cloud still stay there after I manually remove them from the phone? Thank you for your help in advance!
Hi @user-b5a8d2 👋. You can of course delete files from within the Companion App. If they're already uploaded to Pupil Cloud, deleting them from the phone will not delete them from Cloud. It's worth double-checking they're safely backed up in Pupil Cloud first, though!
Is there any documentation on how to add events in the pupil cloud? I can't seem to see how you move the start and end markers. Also, how do you add several event markers to the timeline? thank you for any advice.
Hi Team! I have multiple gaze mappers (calibrations) in one video. When choosing the most accurate gaze mapper for my experiment among them, do you recommend going for the one with the lowest (best) accuracy error in post-hoc validation, or the gaze mapper (calibration) that is temporally closest to my experiment section in the video (so that less change could potentially happen between them, e.g., glasses slippage)? Can you help me please? Thank you very much in advance!!
Does Pupil Core require that the source of IR illumination is from the LED on the glasses? In other words, would the eye tracking still work if an external IR light was used instead? I figure that traditional eye tracking algorithms that detect the angle between the pupil and the glint from the IR light wouldn't work, but I don't know enough about the Pupil Core algorithm to know if the same problem would occur
Hi @user-edef2b! Pupil Core uses IR illuminators that run at a peak wavelength (λ) of ~860 nm (centroid 850 nm ± 42 nm Δλ). These IR illuminators are designed to illuminate the eye region and help capture black-and-white eye images that are used for dark pupil detection. The purpose of these illuminators is to provide enough IR light to the eye so that the pupil detection algorithm can work effectively. You can also use external IR sources as long as they are not filtered by the cameras. If you want to learn more about the algorithm employed, you can find the paper here: https://www.perceptualui.org/publications/dierkes18_etra.pdf
Additionally, what wavelength of IR does the illuminator produce?
Hi Lency,
Happy to help. We could organise a meeting this week and I'll run through my steps?
Hi Dregz. That's really cool! It depends on your schedule. I'm free all week :D.
Hello everyone! We are having problems when calibrating... Now it cannot find the "stop" mark at a distance of two meters, has anyone had the same error? Thank you
Hi @user-4514c3! For the markers to be detected robustly, they need to be of an appropriate size (meaning visible by the scene camera at that distance) and well lit (no shadows cast over the marker).
That said, the markers included within Pupil Core's package are meant to be used at 1 to 2.5 m. And if the calibration marker is detected, I do not think the illumination is a problem.
Might it be that the marker is not well printed? You can try and print one here https://docs.pupil-labs.com/core/software/pupil-capture/#calibration-marker
Anyway, feel free to make a recording with the calibration, and share it with us at [email removed] so we can give you more feedback.
PS. You can also stop the calibration by pressing "C" on your keyboard.
@user-d407c1 How should I disassemble the Pupil Core? It looks like the cables are fixed inside the skeleton.
Hi @user-15e3bc ! Why would you like to disassemble Pupil Core? We can offer you a de-cased version if you would like to prototype
Sorry, I didn't think about integrating with other AR devices at the time of purchase. Is it easy to provide the connection for the de-cased version then? Or can the cable be supplied separately?
I mean the purchase link of the de-cased version
Please contact sales@pupil-labs.com to get a quote for a VR de-cased version.
Okay, thank you!
Hello everyone,
I have a question regarding testing the pupil tracking system on a subject. After performing the Screen Marker calibration, I observed the following values: angular accuracy - 2.8 and precision - 0.61. However, even after this calibration, the tracker was unable to properly recognize the pupil. I'm wondering if this issue might be related to the pupil dilation, as one of the subject's eyes is more dilated than the other.
Hi @user-870276! Would you be able to share an example screen capture such that we can provide feedback?
Hi, I have a question about the eye-tracking method. Is it Infrared Oculography-based or video-based? Does the Pupil Core use an infrared light source?
Is it based on the principle that, if a fixed light source is directed at the eye, the amount of light reflected back to a fixed detector will vary with the eye's position?
Hi @user-8a20ba 👋. Pupil Core is video-based. We use IR emitters to illuminate the eye region, which is recorded by IR spectrum cameras, and we employ a 'dark pupil' detection algorithm to segment and track the pupils. Read about it here: https://arxiv.org/abs/1405.0006
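If you want to poke at the dark-pupil segmentation step in isolation, here is a minimal sketch using the open-source pupil-detectors package; the image file name is just a placeholder for a grayscale IR eye frame:

import cv2
from pupil_detectors import Detector2D

# placeholder: a single grayscale IR eye image, e.g. a frame extracted from eye0.mp4
eye_img = cv2.imread("eye_frame.png", cv2.IMREAD_GRAYSCALE)

detector = Detector2D()
result = detector.detect(eye_img)

# the detector returns a fitted pupil ellipse and a detection confidence
print(result["confidence"], result["ellipse"])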
yes, I suspect the size of the pupil there is causing an issue. Have you tried increasing the expected pupil size in the 2D detector settings? You can get to that in the eye window settings menu.
Hey Neil! Thank you so much! Will try the setting and let you know. Are there any constraints when using the Pupil Core for low vision or people with defective eyes? And which device do you suggest I use in this scenario, as I have both (Pupil Core & Invisible)?
It actually depends on how you intend to use the systems and what data you're looking to obtain. It might be helpful if you could briefly elaborate on your research aims and outcome metrics of interest. Then I can try to point you in the right direction!
🤔 I don't see a gaze circle in that screenshot. Just to clarify, your recording does contain gaze data, right? If not, that would explain the empty surface tracking export.
I believe there was a gaze circle when I tried yesterday, but maybe it's a coincidence that it's not in the screenshot.
What do you mean by gaze data? In my recording there are gaze_timestamps.npy and gaze.pldata, right? Apart from that, I can't find the gaze_positions_on_surface file - it's not that the file exists but is empty.
Just to make it clear, do I just need to press the R button to record, or is there another button in the surface tracker to record and get the file export?
Do the cameras each work individually? If they do, and if your computer has more than one USB controller, you might have success putting them on different busses.
Ok, I will try that.
Hey folks. I believe I remember there is a way to force a monocular gaze map for individuals with strabismus, or who just have a bad track with one eye. Can someone please share that script?
Hey @user-8779ef! Here is the dual monocular gazer: https://gist.github.com/papr/5e1f0fc9ef464691588b3f3e0e95f350
What is the intensity of the IR illuminators on Pupil Core (in either W/sr or W/cm^2 on the eye)?
Hello, I am working with a team on an academic project designing a DIY eye-tracking system. We have come across the 120 Hz individual camera for the Pupil Core and are curious if our team can get ahold of a data sheet for this camera to see if it would be beneficial for our application as opposed to the de-cased webcams used in Pupil Labs' DIY instructions.
Hi @user-ca4e2e 👋. The 120 Hz eye cameras are no longer available. However, we do sell a 200 Hz version, which is an upgrade to the 120 Hz predecessor. Are there specific specifications not listed on the website that you would like to know? You can use a single eye camera for pupillometry, or in combination with a scene camera for monocular gaze tracking. However, for the most robust gaze tracking, we recommend using two eye cameras.
Additionally, can you use a single 120 Hz camera or is it designed to be used in pairs?
We already did experiments on eye and hand tracking on normal people doing their daily activities. Now we want to do it on people with low vision or defective eyes to observe their gaze and fixations while doing daily activities which don't involve drastic movements. In this process we are coming across subjects with bigger pupils or subjects having defective eyes, and the Pupil Core isn't able to obtain gaze and fixation data in these situations. So what would be a better solution for this?
Thanks for elaborating. Pupil size shouldn't have an impact on Pupil Invisible's gaze output. However, what do you mean by defective eyes?
Hi @user-edef2b! Check out this message for reference: https://discord.com/channels/285728493612957698/285728493612957698/1163435437445087322
Hi Neil! That message gave great information about the spectrum of the IR light, but I don't think there was anything about the intensity/brightness of the illuminators. Do you have any info on that or know where I could find it?
I meant Irregular eyes or eye with cataract. And what do you suggest using for this application pupil core or pupil invisible?
To be honest, it's difficult to make a concrete recommendation here as there will be a lot of person-specific characteristics. Would it be feasible for you to try both systems on your participants? The time-cost associated with Invisible will be minimal since it's very easy to set up and use. Perhaps you could try it first and then Core if the output is not sufficiently accurate?
Hey, regarding Core and Capture, I'm currently refactoring / remaking the LSL plugin to work with surfaces as well. Upon debugging, I noticed that the gaze event list isn't ordered by timestamp. Though I realize that by being timestamped the problem is solvable on the receiver's end, I'd like to send the LSL samples in their correct order. Is there any reason they aren't sorted?
Or does this happen when there is generally low confidence?
Hi, @user-59ce16 - I actually have an implementation of exactly this which will be integrated officially after some testing and feedback. Please have a look! https://github.com/domstoppable/App-PupilLabs
Also, is there a way to access a plugin from another plugin? (not refering to the notifications)
Hello, can anyone recommend antiseptic wipes/ spray to clean the Core headset that is available in the UK? Thanks!
Hi @user-74fd00 ! Have you seen our recommendations for disinfecting Pupil Core? https://docs.pupil-labs.com/core/hardware/#disinfecting-pupil-core
Yes, thanks. However, I'm unable to find a product with a similar composition.
Hi, here is a link to recording that i made with pupil core. I wanted to know what would be the reason for poor confidence values and improper gaze detection https://drive.google.com/file/d/1CwJsn0FUMkMThFFA_K3dFHA3P7B-E-6m/view?usp=drive_link
Hi @user-870276. Thanks for sharing the screen capture - that helps. Poor pupil detection in this case is because the pupils are occluded. The full pupil needs to be visible for high-quality data. There's no way around that with the pipeline that Pupil Core employs. Perhaps you could make your experimental environment brighter such that the pupils are not so dilated?
@user-79be36 Hi! Your colleague has let us know that we did not respond to one of your questions here in Discord, and that you needed assistance.
Would you mind letting us know what your question is, either here or at info@pupil-labs.com?
Hello everyone! I just installed Pupil Capture on Linux but it seems the device cannot connect. Any suggestions?
Hi @user-39737f ๐. Please follow these instructions in the first instance: https://docs.pupil-labs.com/core/software/pupil-capture/#linux
Hello, is there a wiring diagram for the Pupil Core right and left eye sensors? I want to wire the sensors to my own Raspberry Pi SBC. Additionally, what is the power draw for each sensor (how much amperage does each sensor use)?
Hi @user-b1befe! We don't have a wiring diagram available to share, but you can connect your cameras to the Pi using a USB cable. The standard USB 2.0 power specifications are applicable. May I ask what you plan to do with your Pi? Are you intending to just collect eye camera streams or do you have other plans?
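As a rough illustration (not an official integration), once an eye camera is plugged into the Pi's USB port it should enumerate as a standard UVC device on Linux, so you could likely grab frames with OpenCV; the device index is a placeholder:

import cv2

cap = cv2.VideoCapture(0)  # placeholder device index for the eye camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # the IR eye image arrives as a regular video frame; process or save it here
    cv2.imshow("eye", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()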
Regarding the timestamps and the events, do they generally come unordered? Or can it be attributed to low confidence? I typically see timestamps in recent_events that are later than timestamps that come in the next list of recent_events.
Hi Neil. Thank you. What is the power draw for each sensor?
Hey Neil, just wanted to follow up on this. Please let me know.
Hello everyone! I'm using Pupil Core with LSL for an experiment. I collect the data via LSL and via Pupil Capture, using the LSL Relay plugin. I have the gaze positions recorded via LSL in my xdf file, but also the .csv generated by Pupil Player after processing the data acquired via Pupil Capture. I noticed that the timestamp associated with the LSL data sometimes goes backward (fig 1 shows the timestamp diffs with negative values), while it does not with the processed Pupil Player file (fig 2, also timestamp diffs). I therefore wonder 1) why do I have these negative values in the LSL file, and 2) how does Pupil Player adjust the timestamps to obtain only positive diffs? Many thanks for your answers!
Are you using a library for processing the xdf?
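If you are loading the .xdf with pyxdf, a minimal sketch (hypothetical file name) to count the negative timestamp differences per stream could look like this:

import numpy as np
import pyxdf

streams, _ = pyxdf.load_xdf("recording.xdf")  # hypothetical path

for stream in streams:
    name = stream["info"]["name"][0]
    diffs = np.diff(np.asarray(stream["time_stamps"]))
    print(name, "negative diffs:", int((diffs < 0).sum()))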
Also, what kind of USB is needed to connect the sensors to the Pi (2.0, 3.0, Type-A, Type-B, etc.)?
USB 2.0. The connection at the camera is a JST type SH (1.0mm). If you haven't already purchased a Core system and want to buy the cameras individually, we ship a USB-A to JST cable with each camera.
I would also like to reiterate my question about what you're looking to do with the Pi. Are you intending to just collect eye camera streams or do you have other plans?
I don't know the individual power draw of the eye cameras, but the combined draw of the eye cameras, their IR illuminators, and the scene camera together has to be < 500mA.
As far as interfacing goes, I believe that if you separate the eye cameras from the rest of the Core headset, you'll be looking at JST-style connectors which won't directly plug into a Raspberry Pi. You'd have to solder the wires onto a USB-A plug
Hi
This is Manuel from Universidad Politécnica de Valencia (Spain)
I have pupil w120 e200b
and I can not finish the calibration step
As soon as you can, could you help me, please?
in the 2nd image, you can see the calibration stops at the 4th movement
Hi @user-a81678! Could you please navigate to the pupil_capture_settings folder, copy the capture.log after the crash, and share it with us at [email removed]?
ok. I will go
I can't find this folder or file in the Pupil v3.5.1 folder
Each Pupil Core software creates its own user directory. It is directly placed in your user's home directory and follows this naming convention: pupil_<name>_settings, e.g. pupil_capture_settings.
Alternatively, you can try using Windows search to locate this folder.
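If it helps, a quick sketch to list those settings folders (and the capture.log inside) from Python:

from pathlib import Path

for folder in Path.home().glob("pupil_*_settings"):
    print(folder)
    log_file = folder / "capture.log"
    if log_file.exists():
        print("  found log:", log_file)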
One more thing, just to be on the same page, does the software crash during the calibration or does it simply not finish/detect the marker?
@user-d407c1
not finish/detect the marker
Ok! then we might not need the log file, could it be simply that the scene camera can't see the marker? perhaps you need to rotate it slightly down?
@Miguel (Pupil Labs)
Miguel, it works now, just like you told me!!
thank you!!
Hi, this is Karan here from Universitat de Barcelona. We are using Pupil Core for our neuromarketing experiment.
I had certain doubts regarding the data analysis.
We tried a lot but cannot understand how to proceed with the data analysis.
Also, can we make heat maps with the pupil labs free software?
Heat maps with Pupil Core can be created using the Surface Tracker plugin (see: https://docs.pupil-labs.com/core/software/pupil-player/#surface-tracker). If you have other specific questions, let us know! ๐
Please let me know!
Thanks!
Hello! I wanted to learn where I can find the serial number for my pupil core device. Can I see it on my hardware or through the pupil capture software?
Hi, @user-5138c1 - Pupil Core units do not have serial numbers